Test Report: Docker_Linux 22054

83cf6fd59e5d8f3d63346b28bfbd6fd8e1f567be:2025-12-07:42677

Test fail (11/434)

TestFunctional/parallel/DashboardCmd (302s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-304107 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-304107 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-304107 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-304107 --alsologtostderr -v=1] stderr:
I1207 22:38:12.856255  448213 out.go:360] Setting OutFile to fd 1 ...
I1207 22:38:12.856550  448213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:12.856562  448213 out.go:374] Setting ErrFile to fd 2...
I1207 22:38:12.856569  448213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:12.856823  448213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
I1207 22:38:12.857051  448213 mustload.go:66] Loading cluster: functional-304107
I1207 22:38:12.857473  448213 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:12.858003  448213 cli_runner.go:164] Run: docker container inspect functional-304107 --format={{.State.Status}}
I1207 22:38:12.878154  448213 host.go:66] Checking if "functional-304107" exists ...
I1207 22:38:12.878428  448213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1207 22:38:12.944543  448213 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-07 22:38:12.932689259 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1207 22:38:12.944680  448213 api_server.go:166] Checking apiserver status ...
I1207 22:38:12.944727  448213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1207 22:38:12.944762  448213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-304107
I1207 22:38:12.965428  448213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-304107/id_rsa Username:docker}
I1207 22:38:13.073992  448213 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9173/cgroup
W1207 22:38:13.084626  448213 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/9173/cgroup: Process exited with status 1
stdout:

stderr:
I1207 22:38:13.084683  448213 ssh_runner.go:195] Run: ls
I1207 22:38:13.089341  448213 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1207 22:38:13.095574  448213 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1207 22:38:13.095658  448213 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1207 22:38:13.095881  448213 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:13.095915  448213 addons.go:70] Setting dashboard=true in profile "functional-304107"
I1207 22:38:13.095930  448213 addons.go:239] Setting addon dashboard=true in "functional-304107"
I1207 22:38:13.095971  448213 host.go:66] Checking if "functional-304107" exists ...
I1207 22:38:13.096492  448213 cli_runner.go:164] Run: docker container inspect functional-304107 --format={{.State.Status}}
I1207 22:38:13.120289  448213 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1207 22:38:13.121696  448213 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1207 22:38:13.123060  448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1207 22:38:13.123082  448213 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1207 22:38:13.123149  448213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-304107
I1207 22:38:13.143790  448213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-304107/id_rsa Username:docker}
I1207 22:38:13.252720  448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1207 22:38:13.252746  448213 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1207 22:38:13.266839  448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1207 22:38:13.266867  448213 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1207 22:38:13.282195  448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1207 22:38:13.282221  448213 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1207 22:38:13.296522  448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1207 22:38:13.296548  448213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1207 22:38:13.311052  448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1207 22:38:13.311081  448213 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1207 22:38:13.325810  448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1207 22:38:13.325838  448213 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1207 22:38:13.340937  448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1207 22:38:13.340966  448213 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1207 22:38:13.356632  448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1207 22:38:13.356659  448213 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1207 22:38:13.372962  448213 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1207 22:38:13.372987  448213 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1207 22:38:13.387042  448213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1207 22:38:13.917423  448213 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-304107 addons enable metrics-server

I1207 22:38:13.918723  448213 addons.go:202] Writing out "functional-304107" config to set dashboard=true...
W1207 22:38:13.918984  448213 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1207 22:38:13.919557  448213 kapi.go:59] client config for functional-304107: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.key", CAFile:"/home/jenkins/minikube-integration/22054-393577/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1207 22:38:13.920037  448213 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1207 22:38:13.920069  448213 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1207 22:38:13.920077  448213 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1207 22:38:13.920086  448213 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1207 22:38:13.920091  448213 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1207 22:38:13.929247  448213 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  99bce6f0-9463-4018-b0e3-5f0a2c287ce8 879 0 2025-12-07 22:38:13 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-07 22:38:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.96.232.143,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.96.232.143],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1207 22:38:13.929380  448213 out.go:285] * Launching proxy ...
* Launching proxy ...
I1207 22:38:13.929430  448213 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-304107 proxy --port 36195]
I1207 22:38:13.929712  448213 dashboard.go:159] Waiting for kubectl to output host:port ...
I1207 22:38:13.984055  448213 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1207 22:38:13.984135  448213 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1207 22:38:13.993839  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cf26a0c6-c6d9-4300-989f-fecadade19cf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:13 GMT]] Body:0xc0008c6700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014e8c0 TLS:<nil>}
I1207 22:38:13.993949  448213 retry.go:31] will retry after 59.863µs: Temporary Error: unexpected response code: 503
I1207 22:38:13.997657  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b8b03d91-3852-40dc-995f-1ae459adc857] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:13 GMT]] Body:0xc0009ab680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208c80 TLS:<nil>}
I1207 22:38:13.997732  448213 retry.go:31] will retry after 117.573µs: Temporary Error: unexpected response code: 503
I1207 22:38:14.001333  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[15395964-f34d-4781-ab85-b24cc1416519] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000938840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000368000 TLS:<nil>}
I1207 22:38:14.001403  448213 retry.go:31] will retry after 114.049µs: Temporary Error: unexpected response code: 503
I1207 22:38:14.004847  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c7ca8db-bebe-452e-b597-e3c8bfb316b9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0009ab800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014ea00 TLS:<nil>}
I1207 22:38:14.004908  448213 retry.go:31] will retry after 249.432µs: Temporary Error: unexpected response code: 503
I1207 22:38:14.008175  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e236c3af-b832-4d94-9577-451b2c3bbf51] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0008c6800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000368140 TLS:<nil>}
I1207 22:38:14.008230  448213 retry.go:31] will retry after 666.034µs: Temporary Error: unexpected response code: 503
I1207 22:38:14.011825  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f3a28c11-ec72-47b2-9741-8272946e0747] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000938a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208f00 TLS:<nil>}
I1207 22:38:14.011879  448213 retry.go:31] will retry after 1.01759ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.015186  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aaa914bf-59b8-4f15-8d2f-55620db50abf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0009abd40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014eb40 TLS:<nil>}
I1207 22:38:14.015265  448213 retry.go:31] will retry after 1.037014ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.018464  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ee66e53f-7067-4b0e-bba6-25924699b13a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0008c68c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000368280 TLS:<nil>}
I1207 22:38:14.018521  448213 retry.go:31] will retry after 2.336945ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.023782  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[92ecbf7a-ead6-4ccd-9f08-e205fb1a73ee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0009abe40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209180 TLS:<nil>}
I1207 22:38:14.023839  448213 retry.go:31] will retry after 2.412507ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.029031  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6a56f758-03a3-4475-9acc-57e84e783f6b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000938b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003683c0 TLS:<nil>}
I1207 22:38:14.029080  448213 retry.go:31] will retry after 3.97432ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.035410  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[20d456c4-2534-4a2d-8929-5581e50358b1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0009abf40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014ec80 TLS:<nil>}
I1207 22:38:14.035460  448213 retry.go:31] will retry after 3.305187ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.041957  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[79a38600-7ceb-46ca-a22d-2058ad3cf893] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0008c69c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000368500 TLS:<nil>}
I1207 22:38:14.042016  448213 retry.go:31] will retry after 8.790709ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.054180  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[49322e45-81b9-4723-b31c-80c2642adf4d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0008c6a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002092c0 TLS:<nil>}
I1207 22:38:14.054263  448213 retry.go:31] will retry after 15.059473ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.072157  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[40e6be51-293e-43a3-8cca-fefb28bca097] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000445b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209400 TLS:<nil>}
I1207 22:38:14.072231  448213 retry.go:31] will retry after 24.1118ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.099292  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[daade65d-5190-4bb9-b8c1-371a4e794aed] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000445d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4500 TLS:<nil>}
I1207 22:38:14.099380  448213 retry.go:31] will retry after 37.685925ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.140700  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac355170-a902-4029-9693-5b1c36cf1b47] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000445e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4780 TLS:<nil>}
I1207 22:38:14.140774  448213 retry.go:31] will retry after 36.15274ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.181066  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[83a91787-371f-4ef7-8f7f-d01a28f3bb27] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc0008c6b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d48c0 TLS:<nil>}
I1207 22:38:14.181170  448213 retry.go:31] will retry after 45.443472ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.230920  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7f8fc8b6-4635-472f-beec-971d8fe3d364] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc001722000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209540 TLS:<nil>}
I1207 22:38:14.230991  448213 retry.go:31] will retry after 71.96418ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.306805  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d66362c3-905a-43f0-9011-6d14d90743dc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000938d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4a00 TLS:<nil>}
I1207 22:38:14.306884  448213 retry.go:31] will retry after 210.770262ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.521296  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[505be831-14a1-465c-b430-0d3ec50ea905] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc001722100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014ef00 TLS:<nil>}
I1207 22:38:14.521387  448213 retry.go:31] will retry after 317.141075ms: Temporary Error: unexpected response code: 503
I1207 22:38:14.842099  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a6d71b68-ec62-41e6-9567-24fb7dbdd057] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:14 GMT]] Body:0xc000938e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4b40 TLS:<nil>}
I1207 22:38:14.842177  448213 retry.go:31] will retry after 304.927123ms: Temporary Error: unexpected response code: 503
I1207 22:38:15.151031  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[601f3e03-1332-4166-8ef6-682c9075cec1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:15 GMT]] Body:0xc001722200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f040 TLS:<nil>}
I1207 22:38:15.151111  448213 retry.go:31] will retry after 317.466762ms: Temporary Error: unexpected response code: 503
I1207 22:38:15.473581  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d5fd1c9d-6adb-4aa2-bc9e-3481af6d391d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:15 GMT]] Body:0xc000939000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4c80 TLS:<nil>}
I1207 22:38:15.473685  448213 retry.go:31] will retry after 971.021144ms: Temporary Error: unexpected response code: 503
I1207 22:38:16.448569  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dfe408eb-9c88-49cc-aa1e-f714b76be5bf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:16 GMT]] Body:0xc0008c6f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f180 TLS:<nil>}
I1207 22:38:16.448650  448213 retry.go:31] will retry after 995.666431ms: Temporary Error: unexpected response code: 503
I1207 22:38:17.447680  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[72e8da47-6a7d-4144-bd1f-0b07b0f5c1ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:17 GMT]] Body:0xc001722300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209680 TLS:<nil>}
I1207 22:38:17.447755  448213 retry.go:31] will retry after 1.120590543s: Temporary Error: unexpected response code: 503
I1207 22:38:18.572054  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4a91fc17-84c5-4ef0-ab78-7f274a0200b9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:18 GMT]] Body:0xc0018980c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d4dc0 TLS:<nil>}
I1207 22:38:18.572134  448213 retry.go:31] will retry after 2.604835681s: Temporary Error: unexpected response code: 503
I1207 22:38:21.182730  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ccc03670-4cc1-437e-bb92-d655d39587cc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:21 GMT]] Body:0xc000939100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002097c0 TLS:<nil>}
I1207 22:38:21.182804  448213 retry.go:31] will retry after 2.530331176s: Temporary Error: unexpected response code: 503
I1207 22:38:23.717422  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c453747e-9cf9-459b-82bd-267304549f71] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:23 GMT]] Body:0xc0018981c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f2c0 TLS:<nil>}
I1207 22:38:23.717498  448213 retry.go:31] will retry after 2.935087579s: Temporary Error: unexpected response code: 503
I1207 22:38:26.656257  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a69b956f-5843-48f6-af57-5094f30b4610] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:26 GMT]] Body:0xc001898240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f400 TLS:<nil>}
I1207 22:38:26.656357  448213 retry.go:31] will retry after 7.498770579s: Temporary Error: unexpected response code: 503
I1207 22:38:34.159052  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d82ca55-9535-4e64-8944-9e13676a6141] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:34 GMT]] Body:0xc001898340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209900 TLS:<nil>}
I1207 22:38:34.159128  448213 retry.go:31] will retry after 18.354015196s: Temporary Error: unexpected response code: 503
I1207 22:38:52.520090  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[550c8f02-b429-4b7c-b896-7935da6e23de] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:38:52 GMT]] Body:0xc000939280 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209a40 TLS:<nil>}
I1207 22:38:52.520166  448213 retry.go:31] will retry after 17.8186629s: Temporary Error: unexpected response code: 503
I1207 22:39:10.344489  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8252a00b-f57b-41c4-bd30-854e20978bc0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:39:10 GMT]] Body:0xc001722440 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f680 TLS:<nil>}
I1207 22:39:10.344576  448213 retry.go:31] will retry after 25.771615049s: Temporary Error: unexpected response code: 503
I1207 22:39:36.119623  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4cec0bd9-067b-454e-af56-bebca988a8bd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:39:36 GMT]] Body:0xc0017224c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f7c0 TLS:<nil>}
I1207 22:39:36.119707  448213 retry.go:31] will retry after 29.676097463s: Temporary Error: unexpected response code: 503
I1207 22:40:05.799705  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[300296ff-3bda-441e-ba0f-38709ce6105c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:40:05 GMT]] Body:0xc001722540 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014f900 TLS:<nil>}
I1207 22:40:05.799786  448213 retry.go:31] will retry after 36.297604507s: Temporary Error: unexpected response code: 503
I1207 22:40:42.102205  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d213b29-fdab-458e-9f67-1b7e4d8c6321] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:40:42 GMT]] Body:0xc001722040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000368640 TLS:<nil>}
I1207 22:40:42.102305  448213 retry.go:31] will retry after 53.053244496s: Temporary Error: unexpected response code: 503
I1207 22:41:35.159121  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bcfc5801-1fe8-4c2e-a4b9-1995efd8cac8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:41:35 GMT]] Body:0xc001722140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014e140 TLS:<nil>}
I1207 22:41:35.159207  448213 retry.go:31] will retry after 48.259824487s: Temporary Error: unexpected response code: 503
I1207 22:42:23.423420  448213 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3f964db3-6505-4852-bd39-4be7b391b245] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 07 Dec 2025 22:42:23 GMT]] Body:0xc00181a140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00014e280 TLS:<nil>}
I1207 22:42:23.423517  448213 retry.go:31] will retry after 1m16.127328834s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-304107
helpers_test.go:243: (dbg) docker inspect functional-304107:

-- stdout --
	[
	    {
	        "Id": "769725322f76d88988e043b1070348920134aa3ad078d15289d551e08a685fb9",
	        "Created": "2025-12-07T22:35:17.716324358Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 428375,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T22:35:17.756587169Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/769725322f76d88988e043b1070348920134aa3ad078d15289d551e08a685fb9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/769725322f76d88988e043b1070348920134aa3ad078d15289d551e08a685fb9/hostname",
	        "HostsPath": "/var/lib/docker/containers/769725322f76d88988e043b1070348920134aa3ad078d15289d551e08a685fb9/hosts",
	        "LogPath": "/var/lib/docker/containers/769725322f76d88988e043b1070348920134aa3ad078d15289d551e08a685fb9/769725322f76d88988e043b1070348920134aa3ad078d15289d551e08a685fb9-json.log",
	        "Name": "/functional-304107",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-304107:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-304107",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "769725322f76d88988e043b1070348920134aa3ad078d15289d551e08a685fb9",
	                "LowerDir": "/var/lib/docker/overlay2/217300164bc4977a7c5d3e80bed0f494a3eb7d2123ea021d2e19e11a0ffb582c-init/diff:/var/lib/docker/overlay2/72e2c0d34d3438044c6ca8754190358557351efc0aeb527bd1060ce52e748152/diff",
	                "MergedDir": "/var/lib/docker/overlay2/217300164bc4977a7c5d3e80bed0f494a3eb7d2123ea021d2e19e11a0ffb582c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/217300164bc4977a7c5d3e80bed0f494a3eb7d2123ea021d2e19e11a0ffb582c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/217300164bc4977a7c5d3e80bed0f494a3eb7d2123ea021d2e19e11a0ffb582c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-304107",
	                "Source": "/var/lib/docker/volumes/functional-304107/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-304107",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-304107",
	                "name.minikube.sigs.k8s.io": "functional-304107",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e3ef7a0b3f8947e2c4fff10e59f55e8dc43d75595ece1feeea31d83e45513ae7",
	            "SandboxKey": "/var/run/docker/netns/e3ef7a0b3f89",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-304107": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4f86ce54efb9121df084309ae3492628b6ce2282fe48f7117090c21b5dae7084",
	                    "EndpointID": "3682bfeabb8df07590c63050c4c59c5ed08fee3a520ae01b51f1dfeef06b031a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "5e:c9:74:e6:1c:3f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-304107",
	                        "769725322f76"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-304107 -n functional-304107
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-304107 logs -n 25: (1.049680895s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-304107 ssh findmnt -T /mount1                                                                                   │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │                     │
	│ mount          │ -p functional-304107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4148170977/001:/mount2 --alsologtostderr -v=1         │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │                     │
	│ mount          │ -p functional-304107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4148170977/001:/mount1 --alsologtostderr -v=1         │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │                     │
	│ ssh            │ functional-304107 ssh findmnt -T /mount1                                                                                   │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ ssh            │ functional-304107 ssh findmnt -T /mount2                                                                                   │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ ssh            │ functional-304107 ssh findmnt -T /mount3                                                                                   │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ mount          │ -p functional-304107 --kill=true                                                                                           │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │                     │
	│ cp             │ functional-304107 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ ssh            │ functional-304107 ssh -n functional-304107 sudo cat /home/docker/cp-test.txt                                               │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ cp             │ functional-304107 cp functional-304107:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1873481159/001/cp-test.txt │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ ssh            │ functional-304107 ssh -n functional-304107 sudo cat /home/docker/cp-test.txt                                               │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ cp             │ functional-304107 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ ssh            │ functional-304107 ssh -n functional-304107 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ ssh            │ functional-304107 ssh echo hello                                                                                           │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ ssh            │ functional-304107 ssh cat /etc/hostname                                                                                    │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ image          │ functional-304107 image ls --format short --alsologtostderr                                                                │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ image          │ functional-304107 image ls --format yaml --alsologtostderr                                                                 │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ ssh            │ functional-304107 ssh pgrep buildkitd                                                                                      │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │                     │
	│ image          │ functional-304107 image ls --format json --alsologtostderr                                                                 │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ image          │ functional-304107 image build -t localhost/my-image:functional-304107 testdata/build --alsologtostderr                     │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ image          │ functional-304107 image ls --format table --alsologtostderr                                                                │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ update-context │ functional-304107 update-context --alsologtostderr -v=2                                                                    │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ update-context │ functional-304107 update-context --alsologtostderr -v=2                                                                    │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ update-context │ functional-304107 update-context --alsologtostderr -v=2                                                                    │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	│ image          │ functional-304107 image ls                                                                                                 │ functional-304107 │ jenkins │ v1.37.0 │ 07 Dec 25 22:38 UTC │ 07 Dec 25 22:38 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:38:12
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:38:12.603145  448076 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:38:12.603268  448076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:38:12.603276  448076 out.go:374] Setting ErrFile to fd 2...
	I1207 22:38:12.603281  448076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:38:12.603518  448076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 22:38:12.604030  448076 out.go:368] Setting JSON to false
	I1207 22:38:12.605523  448076 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4836,"bootTime":1765142257,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:38:12.605620  448076 start.go:143] virtualization: kvm guest
	I1207 22:38:12.607813  448076 out.go:179] * [functional-304107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:38:12.609154  448076 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:38:12.609137  448076 notify.go:221] Checking for updates...
	I1207 22:38:12.610372  448076 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:38:12.611730  448076 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	I1207 22:38:12.613006  448076 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	I1207 22:38:12.614553  448076 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:38:12.615836  448076 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:38:12.617450  448076 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1207 22:38:12.618102  448076 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:38:12.646801  448076 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:38:12.646917  448076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:38:12.711095  448076 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-07 22:38:12.699458963 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:38:12.711265  448076 docker.go:319] overlay module found
	I1207 22:38:12.713410  448076 out.go:179] * Using the docker driver based on existing profile
	I1207 22:38:12.714638  448076 start.go:309] selected driver: docker
	I1207 22:38:12.714655  448076 start.go:927] validating driver "docker" against &{Name:functional-304107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-304107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:38:12.714784  448076 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:38:12.714913  448076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:38:12.783048  448076 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-07 22:38:12.7697066 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:38:12.784020  448076 cni.go:84] Creating CNI manager for ""
	I1207 22:38:12.784118  448076 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 22:38:12.784214  448076 start.go:353] cluster config:
	{Name:functional-304107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-304107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:38:12.786367  448076 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 07 22:38:19 functional-304107 dockerd[7426]: time="2025-12-07T22:38:19.322999997Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:38:19 functional-304107 dockerd[7426]: time="2025-12-07T22:38:19.814535375Z" level=info msg="ignoring event" container=c3712c8864cda85cca1c2c040e753ee71cd6cea918c94a1c4b1b1763ab3f86ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 22:38:20 functional-304107 dockerd[7426]: time="2025-12-07T22:38:20.077445751Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:38:20 functional-304107 cri-dockerd[7726]: time="2025-12-07T22:38:20Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Dec 07 22:38:20 functional-304107 dockerd[7426]: time="2025-12-07T22:38:20.319691632Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:38:20 functional-304107 dockerd[7426]: time="2025-12-07T22:38:20.800569904Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:38:28 functional-304107 dockerd[7426]: 2025/12/07 22:38:28 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Dec 07 22:38:30 functional-304107 dockerd[7426]: time="2025-12-07T22:38:30.401741382Z" level=info msg="sbJoin: gwep4 ''->'2e3c7f304abe', gwep6 ''->''"
	Dec 07 22:38:33 functional-304107 dockerd[7426]: time="2025-12-07T22:38:33.013919707Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:38:33 functional-304107 dockerd[7426]: time="2025-12-07T22:38:33.490633178Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:38:36 functional-304107 dockerd[7426]: time="2025-12-07T22:38:36.012893226Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:38:36 functional-304107 dockerd[7426]: time="2025-12-07T22:38:36.491911678Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:39:02 functional-304107 dockerd[7426]: time="2025-12-07T22:39:02.011735339Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:39:02 functional-304107 dockerd[7426]: time="2025-12-07T22:39:02.496904823Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:39:02 functional-304107 dockerd[7426]: time="2025-12-07T22:39:02.744128567Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:39:03 functional-304107 dockerd[7426]: time="2025-12-07T22:39:03.223445801Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:39:50 functional-304107 dockerd[7426]: time="2025-12-07T22:39:50.011704069Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:39:50 functional-304107 dockerd[7426]: time="2025-12-07T22:39:50.488213758Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:39:53 functional-304107 dockerd[7426]: time="2025-12-07T22:39:53.009618685Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:39:53 functional-304107 dockerd[7426]: time="2025-12-07T22:39:53.492482277Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:41:14 functional-304107 dockerd[7426]: time="2025-12-07T22:41:14.016685028Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:41:14 functional-304107 dockerd[7426]: time="2025-12-07T22:41:14.497253088Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:41:25 functional-304107 dockerd[7426]: time="2025-12-07T22:41:25.011550726Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:41:25 functional-304107 dockerd[7426]: time="2025-12-07T22:41:25.768912175Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:41:25 functional-304107 cri-dockerd[7726]: time="2025-12-07T22:41:25Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0c14fafe1a9f1       nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42                         4 minutes ago       Running             myfrontend                0                   1cd7ae278a95d       sp-pod                                      default
	bb849dc65ed8c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 minutes ago       Exited              mount-munger              0                   c3712c8864cda       busybox-mount                               default
	f745833c35485       mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                         4 minutes ago       Running             mysql                     0                   8e67c5ed948ca       mysql-5bb876957f-4jlkm                      default
	4da267e87e100       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   513192ac0d927       hello-node-75c85bcc94-lsdfr                 default
	02a98c1614cdd       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   7c671e3ae4909       hello-node-connect-7d85dfc575-bw6s8         default
	f1647ace06bf4       nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14                         5 minutes ago       Running             nginx                     0                   5bbe6db70914c       nginx-svc                                   default
	6f84c16817923       52546a367cc9e                                                                                         5 minutes ago       Running             coredns                   2                   3b7754c917be7       coredns-66bc5c9577-4qp99                    kube-system
	822f5ff4ed500       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       3                   4c45eea6d7266       storage-provisioner                         kube-system
	02e2d85da0b9f       8aa150647e88a                                                                                         5 minutes ago       Running             kube-proxy                2                   12d75babeeb60       kube-proxy-pd5wd                            kube-system
	ffaa90d8d1d60       a3e246e9556e9                                                                                         5 minutes ago       Running             etcd                      2                   2d5d029e04d9a       etcd-functional-304107                      kube-system
	6b3c8ac7211b2       01e8bacf0f500                                                                                         5 minutes ago       Running             kube-controller-manager   2                   f7bdd00fb369c       kube-controller-manager-functional-304107   kube-system
	41bbee6a06fdf       88320b5498ff2                                                                                         5 minutes ago       Running             kube-scheduler            2                   cbf972909b91d       kube-scheduler-functional-304107            kube-system
	dba7457ece939       a5f569d49a979                                                                                         5 minutes ago       Running             kube-apiserver            0                   a60ba30d5690b       kube-apiserver-functional-304107            kube-system
	5e90eed3fbdde       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       2                   dad57689661d5       storage-provisioner                         kube-system
	f18e018fee324       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   7af2649c52925       coredns-66bc5c9577-4qp99                    kube-system
	e48c781da85a4       8aa150647e88a                                                                                         6 minutes ago       Exited              kube-proxy                1                   7a6e7aad25963       kube-proxy-pd5wd                            kube-system
	77968cab8a677       a3e246e9556e9                                                                                         6 minutes ago       Exited              etcd                      1                   15f7444f6a22b       etcd-functional-304107                      kube-system
	db300a51b23f0       01e8bacf0f500                                                                                         6 minutes ago       Exited              kube-controller-manager   1                   3d83f012e661e       kube-controller-manager-functional-304107   kube-system
	41fa6477afc27       88320b5498ff2                                                                                         6 minutes ago       Exited              kube-scheduler            1                   0b4dd64d6231e       kube-scheduler-functional-304107            kube-system
	
	
	==> coredns [6f84c1681792] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49309 - 23167 "HINFO IN 5352894005535060145.8675857745092316878. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027266123s
	
	
	==> coredns [f18e018fee32] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44003 - 30256 "HINFO IN 4101114324048550541.6981339967762851229. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021879745s
	
	
	==> describe nodes <==
	Name:               functional-304107
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-304107
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=functional-304107
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T22_35_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 22:35:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-304107
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 22:43:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 22:38:58 +0000   Sun, 07 Dec 2025 22:35:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 22:38:58 +0000   Sun, 07 Dec 2025 22:35:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 22:38:58 +0000   Sun, 07 Dec 2025 22:35:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 22:38:58 +0000   Sun, 07 Dec 2025 22:35:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-304107
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                3a53179d-6e12-4880-a549-d2e469b40494
	  Boot ID:                    10618540-d4ef-4c75-8cf1-8b1c0379fe5e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.1.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-lsdfr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  default                     hello-node-connect-7d85dfc575-bw6s8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  default                     mysql-5bb876957f-4jlkm                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m9s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 coredns-66bc5c9577-4qp99                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m33s
	  kube-system                 etcd-functional-304107                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m39s
	  kube-system                 kube-apiserver-functional-304107              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-controller-manager-functional-304107     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 kube-proxy-pd5wd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-scheduler-functional-304107              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-fll54    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rgc2w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m33s                  kube-proxy       
	  Normal   Starting                 5m45s                  kube-proxy       
	  Normal   Starting                 6m32s                  kube-proxy       
	  Normal   Starting                 7m39s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  7m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m39s                  kubelet          Node functional-304107 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m39s                  kubelet          Node functional-304107 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m39s                  kubelet          Node functional-304107 status is now: NodeHasSufficientPID
	  Normal   NodeReady                7m36s                  kubelet          Node functional-304107 status is now: NodeReady
	  Normal   RegisteredNode           7m34s                  node-controller  Node functional-304107 event: Registered Node functional-304107 in Controller
	  Warning  ContainerGCFailed        6m39s                  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   RegisteredNode           6m30s                  node-controller  Node functional-304107 event: Registered Node functional-304107 in Controller
	  Normal   Starting                 5m49s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m49s (x8 over 5m49s)  kubelet          Node functional-304107 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m49s (x8 over 5m49s)  kubelet          Node functional-304107 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m49s (x7 over 5m49s)  kubelet          Node functional-304107 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           5m43s                  node-controller  Node functional-304107 event: Registered Node functional-304107 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 bd e8 a2 e9 38 08 06
	[  +4.371009] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 76 26 7f 89 eb 37 08 06
	[Dec 7 22:32] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 90 44 62 17 5d 08 06
	[  +0.000614] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba af 6f b2 4f 4e 08 06
	[Dec 7 22:33] IPv4: martian source 10.244.0.1 from 10.244.0.31, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 9a cf 5d 26 73 e5 08 06
	[  +0.000688] IPv4: martian source 10.244.0.31 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ba af 6f b2 4f 4e 08 06
	[  +0.000675] IPv4: martian source 10.244.0.31 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a2 23 e0 4c bb d1 08 06
	[ +14.855650] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 26 7f 89 eb 37 08 06
	[  +1.290739] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba af 6f b2 4f 4e 08 06
	[Dec 7 22:35] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 ed 23 6d c5 f1 08 06
	[  +0.101054] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 62 04 91 35 35 08 06
	[Dec 7 22:36] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fe a9 7b 3e 23 12 08 06
	[Dec 7 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea 06 21 81 6b 9f 08 06
	
	
	==> etcd [77968cab8a67] <==
	{"level":"warn","ts":"2025-12-07T22:36:39.291627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.298284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.307810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.314523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.321554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.328176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.336680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.344097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.350584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.358536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.368743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.376592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.384278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.399357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.413357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.420818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.428832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.436820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.443454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.450843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.458558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.472126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.479937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.486560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:36:39.531211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35432","server-name":"","error":"EOF"}
	
	
	==> etcd [ffaa90d8d1d6] <==
	{"level":"warn","ts":"2025-12-07T22:37:26.355127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.361901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.374406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.381098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.387526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.393930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.400552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.413634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.420320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.426762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.446206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.452747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.459318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.465919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.473623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.480834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.488109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.494566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.500996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.508990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.515518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.528176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.534630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.541094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:37:26.591786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47974","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:43:14 up  1:25,  0 user,  load average: 0.36, 0.71, 1.43
	Linux functional-304107 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [dba7457ece93] <==
	E1207 22:37:27.036884       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 22:37:27.060084       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 22:37:27.866960       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:37:27.934482       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 22:37:28.446349       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 22:37:28.476950       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 22:37:28.496908       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 22:37:28.501915       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 22:37:30.673361       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 22:37:30.723071       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 22:37:30.774827       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 22:37:45.329901       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.186.167"}
	I1207 22:37:49.916091       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.150.78"}
	I1207 22:37:51.221772       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.18.24"}
	I1207 22:37:56.340090       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.189.33"}
	I1207 22:38:04.706261       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.11.211"}
	E1207 22:38:07.360839       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38510: use of closed network connection
	I1207 22:38:13.777117       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 22:38:13.894232       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.232.143"}
	I1207 22:38:13.910016       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.210.158"}
	E1207 22:38:21.923762       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56822: use of closed network connection
	E1207 22:38:23.528198       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56838: use of closed network connection
	E1207 22:38:25.404548       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56864: use of closed network connection
	E1207 22:38:26.740772       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56872: use of closed network connection
	E1207 22:38:28.361721       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56900: use of closed network connection
	
	
	==> kube-controller-manager [6b3c8ac7211b] <==
	I1207 22:37:30.369228       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1207 22:37:30.369241       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1207 22:37:30.369196       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1207 22:37:30.369686       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1207 22:37:30.370622       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1207 22:37:30.370659       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1207 22:37:30.370733       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1207 22:37:30.370765       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1207 22:37:30.370844       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1207 22:37:30.373761       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1207 22:37:30.375129       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 22:37:30.393381       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 22:37:30.394427       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1207 22:37:30.394498       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1207 22:37:30.394555       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1207 22:37:30.394566       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1207 22:37:30.394574       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1207 22:37:30.396712       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1207 22:37:30.399001       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	E1207 22:38:13.833053       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:38:13.835300       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:38:13.837371       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:38:13.838381       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:38:13.841748       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:38:13.845210       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [db300a51b23f] <==
	I1207 22:36:43.336585       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1207 22:36:43.336617       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1207 22:36:43.338766       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1207 22:36:43.340015       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 22:36:43.341094       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1207 22:36:43.383802       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1207 22:36:43.383839       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1207 22:36:43.383919       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1207 22:36:43.383961       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1207 22:36:43.383975       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1207 22:36:43.383964       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1207 22:36:43.384410       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1207 22:36:43.384457       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1207 22:36:43.384541       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1207 22:36:43.387359       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1207 22:36:43.387454       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1207 22:36:43.387567       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-304107"
	I1207 22:36:43.387662       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1207 22:36:43.388653       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1207 22:36:43.388684       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1207 22:36:43.390035       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 22:36:43.391379       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1207 22:36:43.393631       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1207 22:36:43.395839       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1207 22:36:43.402128       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [02e2d85da0b9] <==
	I1207 22:37:28.370113       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:37:28.449841       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 22:37:28.550168       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:37:28.550208       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:37:28.550345       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:37:28.572003       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:37:28.572073       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:37:28.577825       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:37:28.578255       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:37:28.578284       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:37:28.579584       1 config.go:200] "Starting service config controller"
	I1207 22:37:28.579622       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:37:28.579664       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:37:28.579670       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:37:28.579656       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:37:28.579704       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:37:28.579726       1 config.go:309] "Starting node config controller"
	I1207 22:37:28.579744       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:37:28.579752       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:37:28.679842       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 22:37:28.679852       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:37:28.679852       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e48c781da85a] <==
	I1207 22:36:38.113299       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:36:38.179965       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1207 22:36:39.982337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-304107\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1207 22:36:41.480124       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:36:41.480169       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:36:41.480273       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:36:41.504977       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:36:41.505038       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:36:41.510792       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:36:41.511101       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:36:41.511117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:36:41.512614       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:36:41.512640       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:36:41.512650       1 config.go:200] "Starting service config controller"
	I1207 22:36:41.512665       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:36:41.512665       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:36:41.512682       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:36:41.512683       1 config.go:309] "Starting node config controller"
	I1207 22:36:41.512757       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:36:41.512769       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:36:41.612864       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:36:41.612952       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:36:41.612982       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [41bbee6a06fd] <==
	I1207 22:37:25.847375       1 serving.go:386] Generated self-signed cert in-memory
	I1207 22:37:26.976543       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1207 22:37:26.976567       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:37:26.980525       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:37:26.980528       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1207 22:37:26.980553       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:37:26.980563       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 22:37:26.980609       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 22:37:26.980552       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1207 22:37:26.980881       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 22:37:26.980967       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 22:37:27.080905       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1207 22:37:27.080928       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:37:27.080971       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [41fa6477afc2] <==
	I1207 22:36:38.776977       1 serving.go:386] Generated self-signed cert in-memory
	W1207 22:36:39.951455       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 22:36:39.951521       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 22:36:39.951534       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 22:36:39.951544       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 22:36:39.984100       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1207 22:36:39.984132       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:36:39.989511       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:36:39.989570       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:36:39.990020       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 22:36:39.990114       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 22:36:40.089686       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 07 22:41:14 functional-304107 kubelet[8779]: E1207 22:41:14.499945    8779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:41:14 functional-304107 kubelet[8779]: E1207 22:41:14.500006    8779 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:41:14 functional-304107 kubelet[8779]: E1207 22:41:14.500125    8779 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-fll54_kubernetes-dashboard(d923ff83-4020-47ed-99c2-20a55f686fae): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 07 22:41:14 functional-304107 kubelet[8779]: E1207 22:41:14.500177    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
	Dec 07 22:41:25 functional-304107 kubelet[8779]: E1207 22:41:25.771307    8779 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:41:25 functional-304107 kubelet[8779]: E1207 22:41:25.771364    8779 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:41:25 functional-304107 kubelet[8779]: E1207 22:41:25.771463    8779 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-rgc2w_kubernetes-dashboard(834d5a75-d152-4514-84bb-12983bbb23bc): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 07 22:41:25 functional-304107 kubelet[8779]: E1207 22:41:25.771504    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
	Dec 07 22:41:26 functional-304107 kubelet[8779]: E1207 22:41:26.772444    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
	Dec 07 22:41:37 functional-304107 kubelet[8779]: E1207 22:41:37.772720    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
	Dec 07 22:41:41 functional-304107 kubelet[8779]: E1207 22:41:41.771991    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
	Dec 07 22:41:48 functional-304107 kubelet[8779]: E1207 22:41:48.772268    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
	Dec 07 22:41:54 functional-304107 kubelet[8779]: E1207 22:41:54.772781    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
	Dec 07 22:42:01 functional-304107 kubelet[8779]: E1207 22:42:01.772815    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
	Dec 07 22:42:06 functional-304107 kubelet[8779]: E1207 22:42:06.781026    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
	Dec 07 22:42:15 functional-304107 kubelet[8779]: E1207 22:42:15.773271    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
	Dec 07 22:42:20 functional-304107 kubelet[8779]: E1207 22:42:20.772321    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
	Dec 07 22:42:28 functional-304107 kubelet[8779]: E1207 22:42:28.772483    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
	Dec 07 22:42:31 functional-304107 kubelet[8779]: E1207 22:42:31.772050    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
	Dec 07 22:42:39 functional-304107 kubelet[8779]: E1207 22:42:39.772229    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
	Dec 07 22:42:45 functional-304107 kubelet[8779]: E1207 22:42:45.772822    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
	Dec 07 22:42:51 functional-304107 kubelet[8779]: E1207 22:42:51.772153    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
	Dec 07 22:42:56 functional-304107 kubelet[8779]: E1207 22:42:56.772335    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
	Dec 07 22:43:02 functional-304107 kubelet[8779]: E1207 22:43:02.772997    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rgc2w" podUID="834d5a75-d152-4514-84bb-12983bbb23bc"
	Dec 07 22:43:09 functional-304107 kubelet[8779]: E1207 22:43:09.771828    8779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fll54" podUID="d923ff83-4020-47ed-99c2-20a55f686fae"
	
	
	==> storage-provisioner [5e90eed3fbdd] <==
	I1207 22:36:54.278472       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 22:36:54.285665       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 22:36:54.285718       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1207 22:36:54.287805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:36:57.742805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:37:02.003199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:37:05.601824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:37:08.656336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:37:11.678748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:37:11.684092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 22:37:11.684262       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 22:37:11.684436       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e5a30b4-e0f4-4260-983e-9c1d65d52b48", APIVersion:"v1", ResourceVersion:"551", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-304107_6e671eb7-998e-42e1-9718-e975434a6aa0 became leader
	I1207 22:37:11.684457       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-304107_6e671eb7-998e-42e1-9718-e975434a6aa0!
	W1207 22:37:11.686526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:37:11.690802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 22:37:11.784680       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-304107_6e671eb7-998e-42e1-9718-e975434a6aa0!
	W1207 22:37:13.693702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:37:13.697861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:37:15.701701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:37:15.706362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:37:17.709053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:37:17.712903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [822f5ff4ed50] <==
	W1207 22:42:48.923819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:50.927756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:50.931700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:52.935070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:52.939572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:54.943025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:54.948368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:56.951317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:56.955581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:58.959187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:42:58.963293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:43:00.966835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:43:00.971096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:43:02.974362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:43:02.978571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:43:04.982130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:43:04.987237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:43:06.990672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:43:06.994483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:43:08.998649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:43:09.002757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:43:11.006389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:43:11.010411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:43:13.013877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:43:13.019589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-304107 -n functional-304107
helpers_test.go:269: (dbg) Run:  kubectl --context functional-304107 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount dashboard-metrics-scraper-77bf4d6c4c-fll54 kubernetes-dashboard-855c9754f9-rgc2w
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-304107 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-fll54 kubernetes-dashboard-855c9754f9-rgc2w
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-304107 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-fll54 kubernetes-dashboard-855c9754f9-rgc2w: exit status 1 (69.36626ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-304107/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:38:05 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  mount-munger:
	    Container ID:  docker://bb849dc65ed8ca6261155a53ae4b076ab3b743bdcbc0deadff7660638b8f5e67
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 07 Dec 2025 22:38:18 +0000
	      Finished:     Sun, 07 Dec 2025 22:38:18 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjjqx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-sjjqx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m9s   default-scheduler  Successfully assigned default/busybox-mount to functional-304107
	  Normal  Pulling    5m9s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m56s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.979s (12.649s including waiting). Image size: 4403845 bytes.
	  Normal  Created    4m56s  kubelet            Created container: mount-munger
	  Normal  Started    4m56s  kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-fll54" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-rgc2w" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-304107 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-fll54 kubernetes-dashboard-855c9754f9-rgc2w: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.00s)
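The kubelet entries above show both dashboard pods (dashboard-metrics-scraper-77bf4d6c4c-fll54 and kubernetes-dashboard-855c9754f9-rgc2w) cycling through ErrImagePull and ImagePullBackOff because docker.io answered the unauthenticated pulls of kubernetesui/dashboard:v2.7.0 and kubernetesui/metrics-scraper:v1.0.8 with toomanyrequests. With neither pod ever running, the dashboard service never became reachable, which is consistent with `minikube dashboard --url` producing no URL before the 302s timeout. A minimal local workaround sketch, assuming the two images can be pulled once from an un-throttled or authenticated host, is to preload them into the profile (profile name taken from this run) before enabling the addon:

	# Sketch only: preload the dashboard images so kubelet need not pull from docker.io.
	# Whether this actually avoids the registry depends on the addon's imagePullPolicy
	# and on the pinned digest matching what the tag currently points to.
	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	docker pull docker.io/kubernetesui/metrics-scraper:v1.0.8
	minikube -p functional-304107 image load docker.io/kubernetesui/dashboard:v2.7.0
	minikube -p functional-304107 image load docker.io/kubernetesui/metrics-scraper:v1.0.8
	minikube -p functional-304107 addons enable dashboard

If the images are already present in the node's image store when the dashboard Deployments are created, kubelet may not need to contact docker.io at all for this test.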

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (3.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-442811 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-442811 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-442811 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-442811 --alsologtostderr -v=1] stderr:
I1207 22:52:15.572558  481736 out.go:360] Setting OutFile to fd 1 ...
I1207 22:52:15.572863  481736 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:15.572874  481736 out.go:374] Setting ErrFile to fd 2...
I1207 22:52:15.572878  481736 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:15.573081  481736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
I1207 22:52:15.573331  481736 mustload.go:66] Loading cluster: functional-442811
I1207 22:52:15.573679  481736 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1207 22:52:15.574051  481736 cli_runner.go:164] Run: docker container inspect functional-442811 --format={{.State.Status}}
I1207 22:52:15.592795  481736 host.go:66] Checking if "functional-442811" exists ...
I1207 22:52:15.593060  481736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1207 22:52:15.647946  481736 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 22:52:15.638369849 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1207 22:52:15.648111  481736 api_server.go:166] Checking apiserver status ...
I1207 22:52:15.648163  481736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1207 22:52:15.648203  481736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
I1207 22:52:15.667528  481736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-442811/id_rsa Username:docker}
I1207 22:52:15.766864  481736 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/8816/cgroup
W1207 22:52:15.775665  481736 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/8816/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1207 22:52:15.775716  481736 ssh_runner.go:195] Run: ls
I1207 22:52:15.779485  481736 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1207 22:52:15.783726  481736 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1207 22:52:15.783774  481736 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1207 22:52:15.783934  481736 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1207 22:52:15.783956  481736 addons.go:70] Setting dashboard=true in profile "functional-442811"
I1207 22:52:15.783967  481736 addons.go:239] Setting addon dashboard=true in "functional-442811"
I1207 22:52:15.783997  481736 host.go:66] Checking if "functional-442811" exists ...
I1207 22:52:15.784381  481736 cli_runner.go:164] Run: docker container inspect functional-442811 --format={{.State.Status}}
I1207 22:52:15.804536  481736 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1207 22:52:15.805943  481736 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1207 22:52:15.807217  481736 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1207 22:52:15.807243  481736 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1207 22:52:15.807317  481736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
I1207 22:52:15.826206  481736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-442811/id_rsa Username:docker}
I1207 22:52:15.927202  481736 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1207 22:52:15.927228  481736 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1207 22:52:15.940670  481736 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1207 22:52:15.940697  481736 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1207 22:52:15.954036  481736 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1207 22:52:15.954065  481736 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1207 22:52:15.966896  481736 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1207 22:52:15.966917  481736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1207 22:52:15.980344  481736 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1207 22:52:15.980372  481736 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1207 22:52:15.993847  481736 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1207 22:52:15.993872  481736 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1207 22:52:16.007046  481736 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1207 22:52:16.007073  481736 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1207 22:52:16.021298  481736 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1207 22:52:16.021326  481736 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1207 22:52:16.034536  481736 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1207 22:52:16.034559  481736 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1207 22:52:16.047964  481736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1207 22:52:16.484755  481736 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-442811 addons enable metrics-server

                                                
                                                
I1207 22:52:16.485957  481736 addons.go:202] Writing out "functional-442811" config to set dashboard=true...
W1207 22:52:16.486246  481736 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1207 22:52:16.487138  481736 kapi.go:59] client config for functional-442811: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.key", CAFile:"/home/jenkins/minikube-integration/22054-393577/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1207 22:52:16.487664  481736 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1207 22:52:16.487680  481736 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1207 22:52:16.487685  481736 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1207 22:52:16.487689  481736 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1207 22:52:16.487692  481736 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1207 22:52:16.495837  481736 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  be2f7b3e-e2c2-417d-9f25-f2c2ee97d3d5 1421 0 2025-12-07 22:52:16 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-07 22:52:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.111.17.54,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.17.54],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1207 22:52:16.495974  481736 out.go:285] * Launching proxy ...
* Launching proxy ...
I1207 22:52:16.496029  481736 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-442811 proxy --port 36195]
I1207 22:52:16.496315  481736 dashboard.go:159] Waiting for kubectl to output host:port ...
I1207 22:52:16.544092  481736 out.go:203] 
W1207 22:52:16.545384  481736 out.go:285] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W1207 22:52:16.545404  481736 out.go:285] * 
* 
W1207 22:52:16.549719  481736 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1207 22:52:16.551093  481736 out.go:203] 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-442811
helpers_test.go:243: (dbg) docker inspect functional-442811:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762",
	        "Created": "2025-12-07T22:43:22.049081307Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 456864,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T22:43:22.089713066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/hostname",
	        "HostsPath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/hosts",
	        "LogPath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762-json.log",
	        "Name": "/functional-442811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-442811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-442811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762",
	                "LowerDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470-init/diff:/var/lib/docker/overlay2/72e2c0d34d3438044c6ca8754190358557351efc0aeb527bd1060ce52e748152/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-442811",
	                "Source": "/var/lib/docker/volumes/functional-442811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-442811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-442811",
	                "name.minikube.sigs.k8s.io": "functional-442811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "31fe22828acd889527928fad9cf9de4644c6693bf1715496a16bc2b07706d2c3",
	            "SandboxKey": "/var/run/docker/netns/31fe22828acd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-442811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0b711061d8c4c0d449da65ff00c005cf89c83f72d15bf795a0f752ebfb4033e6",
	                    "EndpointID": "056c3331b34f02b412809772934c779a89252a7366e762ac002e00a13fc17922",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "96:63:b1:92:b7:e8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-442811",
	                        "1699178f8a7c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-442811 -n functional-442811
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-442811 logs -n 25: (1.010608771s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1395108936/001:/mount-9p --alsologtostderr -v=1              │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ ssh       │ functional-442811 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ ssh       │ functional-442811 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh       │ functional-442811 ssh -- ls -la /mount-9p                                                                                                           │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh       │ functional-442811 ssh cat /mount-9p/test-1765147922617282077                                                                                        │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh       │ functional-442811 ssh stat /mount-9p/created-by-test                                                                                                │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh       │ functional-442811 ssh stat /mount-9p/created-by-pod                                                                                                 │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh       │ functional-442811 ssh sudo umount -f /mount-9p                                                                                                      │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh       │ functional-442811 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ mount     │ -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2396619733/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ ssh       │ functional-442811 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh       │ functional-442811 ssh -- ls -la /mount-9p                                                                                                           │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh       │ functional-442811 ssh sudo umount -f /mount-9p                                                                                                      │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ mount     │ -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount3 --alsologtostderr -v=1                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ mount     │ -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount2 --alsologtostderr -v=1                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ ssh       │ functional-442811 ssh findmnt -T /mount1                                                                                                            │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ mount     │ -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount1 --alsologtostderr -v=1                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ ssh       │ functional-442811 ssh findmnt -T /mount1                                                                                                            │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh       │ functional-442811 ssh findmnt -T /mount2                                                                                                            │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh       │ functional-442811 ssh findmnt -T /mount3                                                                                                            │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ mount     │ -p functional-442811 --kill=true                                                                                                                    │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ start     │ -p functional-442811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0     │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ start     │ -p functional-442811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0     │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ start     │ -p functional-442811 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0               │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-442811 --alsologtostderr -v=1                                                                                      │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:52:15
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:52:15.351962  481599 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:52:15.352191  481599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:52:15.352201  481599 out.go:374] Setting ErrFile to fd 2...
	I1207 22:52:15.352205  481599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:52:15.352433  481599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 22:52:15.352871  481599 out.go:368] Setting JSON to false
	I1207 22:52:15.353910  481599 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5678,"bootTime":1765142257,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:52:15.353965  481599 start.go:143] virtualization: kvm guest
	I1207 22:52:15.355654  481599 out.go:179] * [functional-442811] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:52:15.357025  481599 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:52:15.357048  481599 notify.go:221] Checking for updates...
	I1207 22:52:15.359227  481599 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:52:15.360452  481599 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	I1207 22:52:15.361525  481599 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	I1207 22:52:15.362415  481599 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:52:15.363417  481599 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:52:15.364856  481599 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1207 22:52:15.365411  481599 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:52:15.388991  481599 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:52:15.389173  481599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:52:15.446662  481599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 22:52:15.436555194 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:52:15.446764  481599 docker.go:319] overlay module found
	I1207 22:52:15.448725  481599 out.go:179] * Using the docker driver based on existing profile
	I1207 22:52:15.449760  481599 start.go:309] selected driver: docker
	I1207 22:52:15.449777  481599 start.go:927] validating driver "docker" against &{Name:functional-442811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-442811 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:52:15.449897  481599 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:52:15.450002  481599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:52:15.506317  481599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 22:52:15.497105264 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:52:15.507092  481599 cni.go:84] Creating CNI manager for ""
	I1207 22:52:15.507183  481599 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 22:52:15.507250  481599 start.go:353] cluster config:
	{Name:functional-442811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-442811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:52:15.509831  481599 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 07 22:47:28 functional-442811 dockerd[7407]: time="2025-12-07T22:47:28.569398442Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:47:28 functional-442811 cri-dockerd[7726]: time="2025-12-07T22:47:28Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Dec 07 22:48:42 functional-442811 dockerd[7407]: time="2025-12-07T22:48:42.272268090Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:48:48 functional-442811 dockerd[7407]: time="2025-12-07T22:48:48.266472601Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:48:52 functional-442811 dockerd[7407]: time="2025-12-07T22:48:52.276705275Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:49:01 functional-442811 dockerd[7407]: time="2025-12-07T22:49:01.262704036Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:51:25 functional-442811 dockerd[7407]: time="2025-12-07T22:51:25.553673455Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:51:25 functional-442811 cri-dockerd[7726]: time="2025-12-07T22:51:25Z" level=info msg="Stop pulling image docker.io/mysql:5.7: 5.7: Pulling from library/mysql"
	Dec 07 22:51:34 functional-442811 dockerd[7407]: time="2025-12-07T22:51:34.271464817Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=7b2717513967 ep=k8s_POD_hello-node-5758569b79-6bwdx_default_bb6e8c71-aaa3-4a46-9946-a5ac8718a889_0 net=none nid=9dd2fc082379
	Dec 07 22:51:34 functional-442811 cri-dockerd[7726]: time="2025-12-07T22:51:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/55db6e6cade28d83a147b979db2343db3cf0b5568d1ee1079a6eb7b6409a4412/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 07 22:51:35 functional-442811 dockerd[7407]: time="2025-12-07T22:51:35.348185586Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:51:38 functional-442811 dockerd[7407]: time="2025-12-07T22:51:38.283338388Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:51:39 functional-442811 dockerd[7407]: time="2025-12-07T22:51:39.276334180Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:51:45 functional-442811 dockerd[7407]: time="2025-12-07T22:51:45.263850720Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:51:49 functional-442811 dockerd[7407]: time="2025-12-07T22:51:49.274460624Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:52:04 functional-442811 dockerd[7407]: time="2025-12-07T22:52:04.525249186Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=3d604275dfc8 ep=k8s_POD_busybox-mount_default_c841ff39-bf01-4031-910c-0e5b10ccf76f_0 net=none nid=9dd2fc082379
	Dec 07 22:52:04 functional-442811 cri-dockerd[7726]: time="2025-12-07T22:52:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cf598e6d0b5c892a95b9e4fedb15ba4c9e6ae0b05cd4a3f707c24736e379c5ac/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 07 22:52:06 functional-442811 cri-dockerd[7726]: time="2025-12-07T22:52:06Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Dec 07 22:52:06 functional-442811 dockerd[7407]: time="2025-12-07T22:52:06.699358402Z" level=info msg="ignoring event" container=e878acdb795753bebae0d3951e3f7b095f3224bc0a9d688b6e8a2b128fd36dac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 22:52:08 functional-442811 dockerd[7407]: time="2025-12-07T22:52:08.434440859Z" level=info msg="ignoring event" container=cf598e6d0b5c892a95b9e4fedb15ba4c9e6ae0b05cd4a3f707c24736e379c5ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 22:52:16 functional-442811 dockerd[7407]: time="2025-12-07T22:52:16.880634161Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=c75514052e5b ep=k8s_POD_dashboard-metrics-scraper-5565989548-6kf2d_kubernetes-dashboard_5a9529c4-8a48-414f-8791-063053747e2e_0 net=none nid=9dd2fc082379
	Dec 07 22:52:16 functional-442811 dockerd[7407]: time="2025-12-07T22:52:16.882474480Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=d8560c1bbf24 ep=k8s_POD_kubernetes-dashboard-b84665fb8-chtwb_kubernetes-dashboard_04735f3d-d1e5-4bcc-b82f-7645afbc28fb_0 net=none nid=9dd2fc082379
	Dec 07 22:52:16 functional-442811 cri-dockerd[7726]: time="2025-12-07T22:52:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/960cc12c4bcdbeb6578d8a2263b02e25f0c6295a23c72068a10059136687267a/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 07 22:52:16 functional-442811 cri-dockerd[7726]: time="2025-12-07T22:52:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d00ac3d40cf4d948819cfeadb51a405c06d4330d40d21c25aeb1a802d176b23f/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 07 22:52:17 functional-442811 dockerd[7407]: time="2025-12-07T22:52:17.203234705Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e878acdb79575       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   11 seconds ago      Exited              mount-munger              0                   cf598e6d0b5c8       busybox-mount                               default
	1dbbaf0070f63       aa5e3ebc0dfed                                                                                         6 minutes ago       Running             coredns                   2                   ab34a5f620355       coredns-7d764666f9-j4t8w                    kube-system
	88ac0a9f95196       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       3                   8edd26c4e4b1f       storage-provisioner                         kube-system
	1241f85066cfe       8a4ded35a3eb1                                                                                         6 minutes ago       Running             kube-proxy                2                   f9a2027ed4081       kube-proxy-d52sm                            kube-system
	83add6b3124c5       a3e246e9556e9                                                                                         6 minutes ago       Running             etcd                      2                   34b1626cbb491       etcd-functional-442811                      kube-system
	aeabb2a736d2e       7bb6219ddab95                                                                                         6 minutes ago       Running             kube-scheduler            2                   582a63f870e0e       kube-scheduler-functional-442811            kube-system
	81b5af16f1153       45f3cc72d235f                                                                                         6 minutes ago       Running             kube-controller-manager   2                   4405a0223b979       kube-controller-manager-functional-442811   kube-system
	c9ff972b6d4dd       aa9d02839d8de                                                                                         6 minutes ago       Running             kube-apiserver            0                   f2eb5859b259b       kube-apiserver-functional-442811            kube-system
	0103126cf1fac       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       2                   18663e96ba841       storage-provisioner                         kube-system
	4e35a7433174e       aa5e3ebc0dfed                                                                                         7 minutes ago       Exited              coredns                   1                   c42fbed28717d       coredns-7d764666f9-j4t8w                    kube-system
	8ccc720180cb5       7bb6219ddab95                                                                                         7 minutes ago       Exited              kube-scheduler            1                   fb9985bca5bd0       kube-scheduler-functional-442811            kube-system
	79247fb2be94f       45f3cc72d235f                                                                                         7 minutes ago       Exited              kube-controller-manager   1                   8e814ef6acfa3       kube-controller-manager-functional-442811   kube-system
	ff1925c035bbc       a3e246e9556e9                                                                                         7 minutes ago       Exited              etcd                      1                   35d62fd2758ef       etcd-functional-442811                      kube-system
	c731d93d46317       8a4ded35a3eb1                                                                                         7 minutes ago       Exited              kube-proxy                1                   eda0c18c12a74       kube-proxy-d52sm                            kube-system
	
	
	==> coredns [1dbbaf0070f6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42546 - 63724 "HINFO IN 3795866867150485848.7821617772591164036. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021891038s
	
	
	==> coredns [4e35a7433174] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46129 - 64813 "HINFO IN 5819608903442519064.4892300565990545133. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019403977s
	
	
	==> describe nodes <==
	Name:               functional-442811
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-442811
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=functional-442811
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T22_43_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 22:43:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-442811
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 22:52:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 22:48:40 +0000   Sun, 07 Dec 2025 22:43:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 22:48:40 +0000   Sun, 07 Dec 2025 22:43:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 22:48:40 +0000   Sun, 07 Dec 2025 22:43:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 22:48:40 +0000   Sun, 07 Dec 2025 22:43:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-442811
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                75fc37d3-ccae-49e7-9308-4a7688634355
	  Boot ID:                    10618540-d4ef-4c75-8cf1-8b1c0379fe5e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.1.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-6bwdx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  default                     hello-node-connect-9f67c86d4-gldsc            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  default                     mysql-844cf969f6-zm2lh                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     6m26s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 coredns-7d764666f9-j4t8w                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m37s
	  kube-system                 etcd-functional-442811                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m44s
	  kube-system                 kube-apiserver-functional-442811              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 kube-controller-manager-functional-442811     200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kube-proxy-d52sm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-scheduler-functional-442811              100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-6kf2d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-chtwb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  8m39s  node-controller  Node functional-442811 event: Registered Node functional-442811 in Controller
	  Normal  RegisteredNode  7m36s  node-controller  Node functional-442811 event: Registered Node functional-442811 in Controller
	  Normal  RegisteredNode  6m47s  node-controller  Node functional-442811 event: Registered Node functional-442811 in Controller
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +2.047884] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000024] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +4.031738] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000022] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +8.383561] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +3.048952] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.046793] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023934] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000022] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023938] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000007] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023928] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000023] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023939] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +2.047870] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +4.031775] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000024] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +8.255538] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000025] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	
	
	==> etcd [83add6b3124c] <==
	{"level":"warn","ts":"2025-12-07T22:45:26.443524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.456496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.465229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.471450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.478374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.484420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.492052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.498708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.504875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.511207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.528806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.541786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.548777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.554976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.561284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.568001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.574155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.580316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.587757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.594537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.613389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.619780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.627493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.633965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.679295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42392","server-name":"","error":"EOF"}
	
	
	==> etcd [ff1925c035bb] <==
	{"level":"warn","ts":"2025-12-07T22:44:37.422211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.431053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.437500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.444765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.450951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.457308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.464461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.470302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.476974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.486685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.492641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.499833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.508638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.515414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.522831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.529466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.535768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.541989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.548192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.555143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.580067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.586763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.593170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.599554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.644544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49272","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:52:17 up  1:34,  0 user,  load average: 0.49, 0.30, 0.88
	Linux functional-442811 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [c9ff972b6d4d] <==
	I1207 22:45:27.111467       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1207 22:45:27.112240       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 22:45:27.113785       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 22:45:27.113805       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1207 22:45:27.115232       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1207 22:45:27.116901       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 22:45:27.127481       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 22:45:27.451017       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:45:28.013053       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1207 22:45:28.426205       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 22:45:28.460038       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 22:45:28.480994       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 22:45:28.487143       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 22:45:30.606652       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 22:45:30.653920       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 22:45:30.704912       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 22:45:46.659446       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.200.238"}
	I1207 22:45:51.565249       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.181.88"}
	I1207 22:45:54.076127       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.134.119"}
	I1207 22:45:57.145808       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.16.235"}
	I1207 22:51:33.902860       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.151.143"}
	I1207 22:52:16.358533       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 22:52:16.463265       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.17.54"}
	I1207 22:52:16.477891       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.206.56"}
	
	
	==> kube-controller-manager [79247fb2be94] <==
	I1207 22:44:41.225515       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225538       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225327       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225806       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225845       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225900       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226080       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226093       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226237       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226254       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226316       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226433       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226705       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1207 22:44:41.226904       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-442811"
	I1207 22:44:41.226954       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1207 22:44:41.227069       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227104       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227789       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227880       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227875       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.229278       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.322481       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.324646       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.324662       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 22:44:41.324666       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-controller-manager [81b5af16f115] <==
	I1207 22:45:30.258361       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258392       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258433       1 range_allocator.go:177] "Sending events to api server"
	I1207 22:45:30.258518       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258521       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1207 22:45:30.258698       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:30.258704       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258623       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258586       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258614       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258633       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258627       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258636       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.266934       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:30.266989       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.358127       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.358147       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 22:45:30.358154       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 22:45:30.367867       1 shared_informer.go:377] "Caches are synced"
	E1207 22:52:16.405357       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.409152       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.414778       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.416310       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.419791       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.422778       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [1241f85066cf] <==
	I1207 22:45:27.866139       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:45:27.929383       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:28.029574       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:28.029634       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:45:28.029756       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:45:28.054488       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:45:28.054553       1 server_linux.go:136] "Using iptables Proxier"
	I1207 22:45:28.061247       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:45:28.061709       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 22:45:28.061749       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:45:28.063126       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:45:28.063159       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:45:28.063200       1 config.go:200] "Starting service config controller"
	I1207 22:45:28.063206       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:45:28.063199       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:45:28.063226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:45:28.063264       1 config.go:309] "Starting node config controller"
	I1207 22:45:28.063271       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:45:28.063278       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:45:28.163244       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:45:28.163313       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:45:28.163332       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c731d93d4631] <==
	I1207 22:44:36.479925       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:44:36.568255       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:44:38.168788       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:38.168826       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:44:38.169011       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:44:38.212997       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:44:38.213059       1 server_linux.go:136] "Using iptables Proxier"
	I1207 22:44:38.218592       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:44:38.219063       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 22:44:38.219101       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:44:38.220955       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:44:38.220984       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:44:38.220986       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:44:38.221014       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:44:38.221097       1 config.go:309] "Starting node config controller"
	I1207 22:44:38.221103       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:44:38.221109       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:44:38.221482       1 config.go:200] "Starting service config controller"
	I1207 22:44:38.221666       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:44:38.321151       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 22:44:38.321157       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:44:38.322219       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8ccc720180cb] <==
	I1207 22:44:36.923472       1 serving.go:386] Generated self-signed cert in-memory
	W1207 22:44:38.008074       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 22:44:38.008306       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 22:44:38.008328       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 22:44:38.008462       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 22:44:38.053370       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 22:44:38.053407       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:44:38.056294       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:44:38.056353       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:44:38.056369       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 22:44:38.056891       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 22:44:38.156767       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [aeabb2a736d2] <==
	I1207 22:45:25.216680       1 serving.go:386] Generated self-signed cert in-memory
	W1207 22:45:27.046224       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 22:45:27.046264       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1207 22:45:27.046277       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 22:45:27.046286       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 22:45:27.062035       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 22:45:27.062058       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:45:27.063812       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:45:27.063845       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:27.063924       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 22:45:27.064004       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 22:45:27.164533       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 07 22:51:59 functional-442811 kubelet[8486]: E1207 22:51:59.279080    8486 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-442811" containerName="etcd"
	Dec 07 22:52:00 functional-442811 kubelet[8486]: E1207 22:52:00.279828    8486 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-442811" containerName="kube-scheduler"
	Dec 07 22:52:03 functional-442811 kubelet[8486]: E1207 22:52:03.280347    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-6bwdx" podUID="bb6e8c71-aaa3-4a46-9946-a5ac8718a889"
	Dec 07 22:52:04 functional-442811 kubelet[8486]: I1207 22:52:04.174097    8486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxvhm\" (UniqueName: \"kubernetes.io/projected/c841ff39-bf01-4031-910c-0e5b10ccf76f-kube-api-access-fxvhm\") pod \"busybox-mount\" (UID: \"c841ff39-bf01-4031-910c-0e5b10ccf76f\") " pod="default/busybox-mount"
	Dec 07 22:52:04 functional-442811 kubelet[8486]: I1207 22:52:04.174153    8486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c841ff39-bf01-4031-910c-0e5b10ccf76f-test-volume\") pod \"busybox-mount\" (UID: \"c841ff39-bf01-4031-910c-0e5b10ccf76f\") " pod="default/busybox-mount"
	Dec 07 22:52:06 functional-442811 kubelet[8486]: E1207 22:52:06.279974    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="a0787535-37fd-46f0-bd6d-f603d1557ee2"
	Dec 07 22:52:07 functional-442811 kubelet[8486]: E1207 22:52:07.281835    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-zm2lh" podUID="e691e40e-e4d5-4b7e-b852-ca016ddf9542"
	Dec 07 22:52:08 functional-442811 kubelet[8486]: E1207 22:52:08.280212    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-gldsc" podUID="df13df8d-919f-420d-b0e4-4e5e489d4991"
	Dec 07 22:52:08 functional-442811 kubelet[8486]: I1207 22:52:08.602892    8486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c841ff39-bf01-4031-910c-0e5b10ccf76f-kube-api-access-fxvhm\" (UniqueName: \"kubernetes.io/projected/c841ff39-bf01-4031-910c-0e5b10ccf76f-kube-api-access-fxvhm\") pod \"c841ff39-bf01-4031-910c-0e5b10ccf76f\" (UID: \"c841ff39-bf01-4031-910c-0e5b10ccf76f\") "
	Dec 07 22:52:08 functional-442811 kubelet[8486]: I1207 22:52:08.602974    8486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/c841ff39-bf01-4031-910c-0e5b10ccf76f-test-volume\" (UniqueName: \"kubernetes.io/host-path/c841ff39-bf01-4031-910c-0e5b10ccf76f-test-volume\") pod \"c841ff39-bf01-4031-910c-0e5b10ccf76f\" (UID: \"c841ff39-bf01-4031-910c-0e5b10ccf76f\") "
	Dec 07 22:52:08 functional-442811 kubelet[8486]: I1207 22:52:08.603076    8486 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c841ff39-bf01-4031-910c-0e5b10ccf76f-test-volume" pod "c841ff39-bf01-4031-910c-0e5b10ccf76f" (UID: "c841ff39-bf01-4031-910c-0e5b10ccf76f"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 07 22:52:08 functional-442811 kubelet[8486]: I1207 22:52:08.605211    8486 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c841ff39-bf01-4031-910c-0e5b10ccf76f-kube-api-access-fxvhm" pod "c841ff39-bf01-4031-910c-0e5b10ccf76f" (UID: "c841ff39-bf01-4031-910c-0e5b10ccf76f"). InnerVolumeSpecName "kube-api-access-fxvhm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 07 22:52:08 functional-442811 kubelet[8486]: I1207 22:52:08.703428    8486 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fxvhm\" (UniqueName: \"kubernetes.io/projected/c841ff39-bf01-4031-910c-0e5b10ccf76f-kube-api-access-fxvhm\") on node \"functional-442811\" DevicePath \"\""
	Dec 07 22:52:08 functional-442811 kubelet[8486]: I1207 22:52:08.703467    8486 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c841ff39-bf01-4031-910c-0e5b10ccf76f-test-volume\") on node \"functional-442811\" DevicePath \"\""
	Dec 07 22:52:09 functional-442811 kubelet[8486]: E1207 22:52:09.282288    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="df055e75-dc3b-436d-9b15-60ec788da8a6"
	Dec 07 22:52:09 functional-442811 kubelet[8486]: I1207 22:52:09.316984    8486 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf598e6d0b5c892a95b9e4fedb15ba4c9e6ae0b05cd4a3f707c24736e379c5ac"
	Dec 07 22:52:13 functional-442811 kubelet[8486]: E1207 22:52:13.279283    8486 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-442811" containerName="kube-apiserver"
	Dec 07 22:52:16 functional-442811 kubelet[8486]: I1207 22:52:16.555243    8486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/04735f3d-d1e5-4bcc-b82f-7645afbc28fb-tmp-volume\") pod \"kubernetes-dashboard-b84665fb8-chtwb\" (UID: \"04735f3d-d1e5-4bcc-b82f-7645afbc28fb\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-chtwb"
	Dec 07 22:52:16 functional-442811 kubelet[8486]: I1207 22:52:16.555298    8486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5a9529c4-8a48-414f-8791-063053747e2e-tmp-volume\") pod \"dashboard-metrics-scraper-5565989548-6kf2d\" (UID: \"5a9529c4-8a48-414f-8791-063053747e2e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-6kf2d"
	Dec 07 22:52:16 functional-442811 kubelet[8486]: I1207 22:52:16.555445    8486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbckm\" (UniqueName: \"kubernetes.io/projected/5a9529c4-8a48-414f-8791-063053747e2e-kube-api-access-tbckm\") pod \"dashboard-metrics-scraper-5565989548-6kf2d\" (UID: \"5a9529c4-8a48-414f-8791-063053747e2e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-6kf2d"
	Dec 07 22:52:16 functional-442811 kubelet[8486]: I1207 22:52:16.555496    8486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cpx5\" (UniqueName: \"kubernetes.io/projected/04735f3d-d1e5-4bcc-b82f-7645afbc28fb-kube-api-access-4cpx5\") pod \"kubernetes-dashboard-b84665fb8-chtwb\" (UID: \"04735f3d-d1e5-4bcc-b82f-7645afbc28fb\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-chtwb"
	Dec 07 22:52:17 functional-442811 kubelet[8486]: E1207 22:52:17.692196    8486 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:52:17 functional-442811 kubelet[8486]: E1207 22:52:17.692261    8486 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:52:17 functional-442811 kubelet[8486]: E1207 22:52:17.692620    8486 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-5565989548-6kf2d_kubernetes-dashboard(5a9529c4-8a48-414f-8791-063053747e2e): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 07 22:52:17 functional-442811 kubelet[8486]: E1207 22:52:17.692672    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-6kf2d" podUID="5a9529c4-8a48-414f-8791-063053747e2e"
	
	
	==> storage-provisioner [0103126cf1fa] <==
	I1207 22:44:50.702907       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 22:44:50.702947       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1207 22:44:50.705054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:54.159918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:58.420300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:02.019059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:05.072830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.094990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.100139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 22:45:08.100304       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 22:45:08.100474       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-442811_7469dd5e-c834-4569-8c4d-488a475d8a7b!
	I1207 22:45:08.100445       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6744592d-205c-493c-9eed-33025935219a", APIVersion:"v1", ResourceVersion:"558", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-442811_7469dd5e-c834-4569-8c4d-488a475d8a7b became leader
	W1207 22:45:08.102399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.105316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 22:45:08.200705       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-442811_7469dd5e-c834-4569-8c4d-488a475d8a7b!
	W1207 22:45:10.108971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:10.113858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:12.117456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:12.121527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:14.124245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:14.128190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:16.131409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:16.136192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:18.138850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:18.142809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [88ac0a9f9519] <==
	W1207 22:51:52.637454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:54.640634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:54.645497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:56.648655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:56.652276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:58.657020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:58.662012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:00.666007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:00.670057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:02.674211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:02.678179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:04.681030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:04.684670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:06.687857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:06.693192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:08.696769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:08.700447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:10.704222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:10.707858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:12.711276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:12.714911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:14.718499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:14.724631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:16.728148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:16.733886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-442811 -n functional-442811
helpers_test.go:269: (dbg) Run:  kubectl --context functional-442811 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod dashboard-metrics-scraper-5565989548-6kf2d kubernetes-dashboard-b84665fb8-chtwb
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-442811 describe pod busybox-mount hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod dashboard-metrics-scraper-5565989548-6kf2d kubernetes-dashboard-b84665fb8-chtwb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-442811 describe pod busybox-mount hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod dashboard-metrics-scraper-5565989548-6kf2d kubernetes-dashboard-b84665fb8-chtwb: exit status 1 (105.676394ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:52:04 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  mount-munger:
	    Container ID:  docker://e878acdb795753bebae0d3951e3f7b095f3224bc0a9d688b6e8a2b128fd36dac
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 07 Dec 2025 22:52:06 +0000
	      Finished:     Sun, 07 Dec 2025 22:52:06 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxvhm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fxvhm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  14s   default-scheduler  Successfully assigned default/busybox-mount to functional-442811
	  Normal  Pulling    14s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     12s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.967s (1.967s including waiting). Image size: 4403845 bytes.
	  Normal  Created    12s   kubelet            Container created
	  Normal  Started    12s   kubelet            Container started
	
	
	Name:             hello-node-5758569b79-6bwdx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:51:33 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vq8rd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vq8rd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  44s                default-scheduler  Successfully assigned default/hello-node-5758569b79-6bwdx to functional-442811
	  Warning  Failed     29s (x2 over 43s)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     29s (x2 over 43s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    15s (x2 over 42s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     15s (x2 over 42s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    0s (x3 over 44s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-gldsc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dh5vh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dh5vh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m21s                  default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-gldsc to functional-442811
	  Normal   Pulling    3m27s (x5 over 6m21s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m26s (x5 over 6m20s)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m26s (x5 over 6m20s)  kubelet            Error: ErrImagePull
	  Warning  Failed     78s (x20 over 6m20s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    65s (x21 over 6m20s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-844cf969f6-zm2lh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:51 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-48z94 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-48z94:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m26s                  default-scheduler  Successfully assigned default/mysql-844cf969f6-zm2lh to functional-442811
	  Warning  Failed     6m25s                  kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m37s (x5 over 6m26s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     3m36s (x5 over 6m25s)  kubelet            Error: ErrImagePull
	  Warning  Failed     3m36s (x4 over 6m10s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     80s (x20 over 6m25s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    67s (x21 over 6m25s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:54 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r8xnp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r8xnp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m24s                  default-scheduler  Successfully assigned default/nginx-svc to functional-442811
	  Warning  Failed     4m50s                  kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m18s (x5 over 6m24s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m17s (x4 over 6m23s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m17s (x5 over 6m23s)  kubelet            Error: ErrImagePull
	  Warning  Failed     72s (x20 over 6m23s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    57s (x21 over 6m23s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:59 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vt2bl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vt2bl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m19s                  default-scheduler  Successfully assigned default/sp-pod to functional-442811
	  Normal   Pulling    3m31s (x5 over 6m19s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m30s (x5 over 6m18s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m30s (x5 over 6m18s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    76s (x21 over 6m18s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     76s (x21 over 6m18s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-6kf2d" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-chtwb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-442811 describe pod busybox-mount hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod dashboard-metrics-scraper-5565989548-6kf2d kubernetes-dashboard-b84665fb8-chtwb: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (3.05s)
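
Note: the kubelet events and dockerd errors above all point at the same root cause: unauthenticated Docker Hub pulls hitting the toomanyrequests rate limit, so the dashboard, mysql, nginx and echo-server workloads never obtain their images. A minimal mitigation sketch, assuming Docker Hub credentials or a registry mirror are available (the user name below is a placeholder, not part of the recorded run):

	# authenticate the node's Docker daemon so pulls count against an authenticated quota
	minikube -p functional-442811 ssh -- docker login -u <dockerhub-user>
	# or pass a public mirror to the Docker daemon when (re)creating the profile
	minikube start -p functional-442811 --registry-mirror=https://mirror.gcr.io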

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-442811 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-442811 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-gldsc" [df13df8d-919f-420d-b0e4-4e5e489d4991] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-442811 -n functional-442811
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-07 22:55:57.47139973 +0000 UTC m=+1587.757452312
functional_test.go:1645: (dbg) Run:  kubectl --context functional-442811 describe po hello-node-connect-9f67c86d4-gldsc -n default
functional_test.go:1645: (dbg) kubectl --context functional-442811 describe po hello-node-connect-9f67c86d4-gldsc -n default:
Name:             hello-node-connect-9f67c86d4-gldsc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-442811/192.168.49.2
Start Time:       Sun, 07 Dec 2025 22:45:57 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dh5vh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-dh5vh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-gldsc to functional-442811
  Normal   Pulling    7m6s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m5s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     7m5s (x5 over 9m59s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m57s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m44s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-442811 logs hello-node-connect-9f67c86d4-gldsc -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-442811 logs hello-node-connect-9f67c86d4-gldsc -n default: exit status 1 (66.276276ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-gldsc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-442811 logs hello-node-connect-9f67c86d4-gldsc -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-442811 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-gldsc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-442811/192.168.49.2
Start Time:       Sun, 07 Dec 2025 22:45:57 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dh5vh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-dh5vh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-gldsc to functional-442811
  Normal   Pulling    7m6s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m5s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     7m5s (x5 over 9m59s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m57s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m44s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-442811 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-442811 logs -l app=hello-node-connect: exit status 1 (62.678268ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-gldsc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-442811 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-442811 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.16.235
IPs:                      10.99.16.235
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30178/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
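Note: the service describe above shows an empty Endpoints list because the only pod matching the selector (app=hello-node-connect) never became Ready after its image pull failed, so NodePort 30178 has nothing to forward to. Hypothetical checks one could run to confirm this, not part of the recorded run:

	kubectl --context functional-442811 get endpoints hello-node-connect -n default
	kubectl --context functional-442811 get pods -n default -l app=hello-node-connect -o wide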
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-442811
helpers_test.go:243: (dbg) docker inspect functional-442811:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762",
	        "Created": "2025-12-07T22:43:22.049081307Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 456864,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T22:43:22.089713066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/hostname",
	        "HostsPath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/hosts",
	        "LogPath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762-json.log",
	        "Name": "/functional-442811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-442811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-442811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762",
	                "LowerDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470-init/diff:/var/lib/docker/overlay2/72e2c0d34d3438044c6ca8754190358557351efc0aeb527bd1060ce52e748152/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-442811",
	                "Source": "/var/lib/docker/volumes/functional-442811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-442811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-442811",
	                "name.minikube.sigs.k8s.io": "functional-442811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "31fe22828acd889527928fad9cf9de4644c6693bf1715496a16bc2b07706d2c3",
	            "SandboxKey": "/var/run/docker/netns/31fe22828acd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-442811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0b711061d8c4c0d449da65ff00c005cf89c83f72d15bf795a0f752ebfb4033e6",
	                    "EndpointID": "056c3331b34f02b412809772934c779a89252a7366e762ac002e00a13fc17922",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "96:63:b1:92:b7:e8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-442811",
	                        "1699178f8a7c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
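Note: the inspect output shows the minikube node container running with the API server port 8441/tcp published on 127.0.0.1:33170, so the failure is confined to image pulls rather than cluster networking. For reference, a one-liner of the kind that could read the same mapping (hypothetical, not part of the recorded run):

	docker inspect functional-442811 --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'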
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-442811 -n functional-442811
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-442811 logs -n 25: (1.019947767s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-442811 ssh -- ls -la /mount-9p                                                                                                       │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh            │ functional-442811 ssh sudo umount -f /mount-9p                                                                                                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ mount          │ -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount3 --alsologtostderr -v=1              │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ mount          │ -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount2 --alsologtostderr -v=1              │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ ssh            │ functional-442811 ssh findmnt -T /mount1                                                                                                        │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ mount          │ -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount1 --alsologtostderr -v=1              │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ ssh            │ functional-442811 ssh findmnt -T /mount1                                                                                                        │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh            │ functional-442811 ssh findmnt -T /mount2                                                                                                        │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh            │ functional-442811 ssh findmnt -T /mount3                                                                                                        │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ mount          │ -p functional-442811 --kill=true                                                                                                                │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ start          │ -p functional-442811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0 │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ start          │ -p functional-442811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0 │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ start          │ -p functional-442811 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0           │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-442811 --alsologtostderr -v=1                                                                                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ license        │                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ update-context │ functional-442811 update-context --alsologtostderr -v=2                                                                                         │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ update-context │ functional-442811 update-context --alsologtostderr -v=2                                                                                         │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ update-context │ functional-442811 update-context --alsologtostderr -v=2                                                                                         │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-442811 image ls --format short --alsologtostderr                                                                                     │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-442811 image ls --format yaml --alsologtostderr                                                                                      │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh            │ functional-442811 ssh pgrep buildkitd                                                                                                           │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ image          │ functional-442811 image build -t localhost/my-image:functional-442811 testdata/build --alsologtostderr                                          │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-442811 image ls                                                                                                                      │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-442811 image ls --format json --alsologtostderr                                                                                      │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-442811 image ls --format table --alsologtostderr                                                                                     │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:52:15
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:52:15.351962  481599 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:52:15.352191  481599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:52:15.352201  481599 out.go:374] Setting ErrFile to fd 2...
	I1207 22:52:15.352205  481599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:52:15.352433  481599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 22:52:15.352871  481599 out.go:368] Setting JSON to false
	I1207 22:52:15.353910  481599 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5678,"bootTime":1765142257,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:52:15.353965  481599 start.go:143] virtualization: kvm guest
	I1207 22:52:15.355654  481599 out.go:179] * [functional-442811] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:52:15.357025  481599 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:52:15.357048  481599 notify.go:221] Checking for updates...
	I1207 22:52:15.359227  481599 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:52:15.360452  481599 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	I1207 22:52:15.361525  481599 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	I1207 22:52:15.362415  481599 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:52:15.363417  481599 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:52:15.364856  481599 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1207 22:52:15.365411  481599 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:52:15.388991  481599 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:52:15.389173  481599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:52:15.446662  481599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 22:52:15.436555194 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:52:15.446764  481599 docker.go:319] overlay module found
	I1207 22:52:15.448725  481599 out.go:179] * Using the docker driver based on existing profile
	I1207 22:52:15.449760  481599 start.go:309] selected driver: docker
	I1207 22:52:15.449777  481599 start.go:927] validating driver "docker" against &{Name:functional-442811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-442811 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:52:15.449897  481599 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:52:15.450002  481599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:52:15.506317  481599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 22:52:15.497105264 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:52:15.507092  481599 cni.go:84] Creating CNI manager for ""
	I1207 22:52:15.507183  481599 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 22:52:15.507250  481599 start.go:353] cluster config:
	{Name:functional-442811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-442811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:52:15.509831  481599 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 07 22:52:17 functional-442811 dockerd[7407]: time="2025-12-07T22:52:17.203234705Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:52:17 functional-442811 dockerd[7407]: time="2025-12-07T22:52:17.689649853Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:52:17 functional-442811 dockerd[7407]: time="2025-12-07T22:52:17.927471618Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:52:18 functional-442811 dockerd[7407]: time="2025-12-07T22:52:18.410208645Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:52:19 functional-442811 dockerd[7407]: time="2025-12-07T22:52:19.413243936Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:52:23 functional-442811 dockerd[7407]: time="2025-12-07T22:52:23.298913078Z" level=info msg="sbJoin: gwep4 ''->'ade309e1b37d', gwep6 ''->''"
	Dec 07 22:52:30 functional-442811 dockerd[7407]: time="2025-12-07T22:52:30.524525032Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:52:31 functional-442811 dockerd[7407]: time="2025-12-07T22:52:31.003907059Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:52:33 functional-442811 dockerd[7407]: time="2025-12-07T22:52:33.519999508Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:52:34 functional-442811 dockerd[7407]: time="2025-12-07T22:52:34.000741842Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:52:57 functional-442811 dockerd[7407]: time="2025-12-07T22:52:57.521343976Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:52:57 functional-442811 dockerd[7407]: time="2025-12-07T22:52:57.996442965Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:53:03 functional-442811 dockerd[7407]: time="2025-12-07T22:53:03.518951686Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:53:04 functional-442811 dockerd[7407]: time="2025-12-07T22:53:04.003130172Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:53:10 functional-442811 dockerd[7407]: time="2025-12-07T22:53:10.260438894Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:53:44 functional-442811 dockerd[7407]: time="2025-12-07T22:53:44.519195114Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:53:45 functional-442811 dockerd[7407]: time="2025-12-07T22:53:45.285735209Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:53:45 functional-442811 cri-dockerd[7726]: time="2025-12-07T22:53:45Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Dec 07 22:53:48 functional-442811 dockerd[7407]: time="2025-12-07T22:53:48.516413636Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:53:48 functional-442811 dockerd[7407]: time="2025-12-07T22:53:48.991029698Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:54:40 functional-442811 dockerd[7407]: time="2025-12-07T22:54:40.282328309Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:55:06 functional-442811 dockerd[7407]: time="2025-12-07T22:55:06.522724087Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:55:07 functional-442811 dockerd[7407]: time="2025-12-07T22:55:07.010549287Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:55:13 functional-442811 dockerd[7407]: time="2025-12-07T22:55:13.522616336Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:55:14 functional-442811 dockerd[7407]: time="2025-12-07T22:55:14.004231447Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e878acdb79575       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   3 minutes ago       Exited              mount-munger              0                   cf598e6d0b5c8       busybox-mount                               default
	1dbbaf0070f63       aa5e3ebc0dfed                                                                                         10 minutes ago      Running             coredns                   2                   ab34a5f620355       coredns-7d764666f9-j4t8w                    kube-system
	88ac0a9f95196       6e38f40d628db                                                                                         10 minutes ago      Running             storage-provisioner       3                   8edd26c4e4b1f       storage-provisioner                         kube-system
	1241f85066cfe       8a4ded35a3eb1                                                                                         10 minutes ago      Running             kube-proxy                2                   f9a2027ed4081       kube-proxy-d52sm                            kube-system
	83add6b3124c5       a3e246e9556e9                                                                                         10 minutes ago      Running             etcd                      2                   34b1626cbb491       etcd-functional-442811                      kube-system
	aeabb2a736d2e       7bb6219ddab95                                                                                         10 minutes ago      Running             kube-scheduler            2                   582a63f870e0e       kube-scheduler-functional-442811            kube-system
	81b5af16f1153       45f3cc72d235f                                                                                         10 minutes ago      Running             kube-controller-manager   2                   4405a0223b979       kube-controller-manager-functional-442811   kube-system
	c9ff972b6d4dd       aa9d02839d8de                                                                                         10 minutes ago      Running             kube-apiserver            0                   f2eb5859b259b       kube-apiserver-functional-442811            kube-system
	0103126cf1fac       6e38f40d628db                                                                                         11 minutes ago      Exited              storage-provisioner       2                   18663e96ba841       storage-provisioner                         kube-system
	4e35a7433174e       aa5e3ebc0dfed                                                                                         11 minutes ago      Exited              coredns                   1                   c42fbed28717d       coredns-7d764666f9-j4t8w                    kube-system
	8ccc720180cb5       7bb6219ddab95                                                                                         11 minutes ago      Exited              kube-scheduler            1                   fb9985bca5bd0       kube-scheduler-functional-442811            kube-system
	79247fb2be94f       45f3cc72d235f                                                                                         11 minutes ago      Exited              kube-controller-manager   1                   8e814ef6acfa3       kube-controller-manager-functional-442811   kube-system
	ff1925c035bbc       a3e246e9556e9                                                                                         11 minutes ago      Exited              etcd                      1                   35d62fd2758ef       etcd-functional-442811                      kube-system
	c731d93d46317       8a4ded35a3eb1                                                                                         11 minutes ago      Exited              kube-proxy                1                   eda0c18c12a74       kube-proxy-d52sm                            kube-system
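The container status table above is the runtime's view of the node; a comparable listing can be pulled from the same profile with, for example:

	# list every container, including exited ones, inside the minikube node
	minikube -p functional-442811 ssh -- sudo crictl ps -a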
	
	
	==> coredns [1dbbaf0070f6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42546 - 63724 "HINFO IN 3795866867150485848.7821617772591164036. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021891038s
	
	
	==> coredns [4e35a7433174] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46129 - 64813 "HINFO IN 5819608903442519064.4892300565990545133. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019403977s
	
	
	==> describe nodes <==
	Name:               functional-442811
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-442811
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=functional-442811
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T22_43_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 22:43:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-442811
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 22:55:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 22:52:25 +0000   Sun, 07 Dec 2025 22:43:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 22:52:25 +0000   Sun, 07 Dec 2025 22:43:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 22:52:25 +0000   Sun, 07 Dec 2025 22:43:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 22:52:25 +0000   Sun, 07 Dec 2025 22:43:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-442811
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                75fc37d3-ccae-49e7-9308-4a7688634355
	  Boot ID:                    10618540-d4ef-4c75-8cf1-8b1c0379fe5e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.1.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-6bwdx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  default                     hello-node-connect-9f67c86d4-gldsc            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-zm2lh                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 coredns-7d764666f9-j4t8w                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-442811                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-442811              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-442811     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-d52sm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-442811              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-6kf2d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-chtwb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  12m   node-controller  Node functional-442811 event: Registered Node functional-442811 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node functional-442811 event: Registered Node functional-442811 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-442811 event: Registered Node functional-442811 in Controller
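For reference, the Allocated resources block is simply the column sums of the pod table above: CPU requests 600m + 100m + 100m + 250m + 200m + 100m = 1350m of the node's 8 CPUs (roughly 16%), and memory requests 512Mi + 70Mi + 100Mi = 682Mi. The same node view can be regenerated at any point with:

	kubectl --context functional-442811 describe node functional-442811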
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +2.047884] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000024] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +4.031738] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000022] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +8.383561] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +3.048952] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.046793] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023934] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000022] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023938] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000007] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023928] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000023] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023939] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +2.047870] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +4.031775] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000024] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +8.255538] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000025] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	
	
	==> etcd [83add6b3124c] <==
	{"level":"warn","ts":"2025-12-07T22:45:26.471450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.478374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.484420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.492052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.498708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.504875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.511207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.528806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.541786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.548777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.554976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.561284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.568001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.574155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.580316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.587757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.594537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.613389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.619780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.627493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.633965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.679295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42392","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T22:55:26.195035Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1217}
	{"level":"info","ts":"2025-12-07T22:55:26.214333Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1217,"took":"18.921522ms","hash":2299821684,"current-db-size-bytes":4055040,"current-db-size":"4.1 MB","current-db-size-in-use-bytes":2117632,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-12-07T22:55:26.214380Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2299821684,"revision":1217,"compact-revision":-1}
	
	
	==> etcd [ff1925c035bb] <==
	{"level":"warn","ts":"2025-12-07T22:44:37.422211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.431053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.437500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.444765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.450951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.457308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.464461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.470302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.476974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.486685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.492641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.499833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.508638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.515414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.522831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.529466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.535768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.541989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.548192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.555143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.580067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.586763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.593170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.599554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.644544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49272","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:55:58 up  1:38,  0 user,  load average: 0.02, 0.16, 0.70
	Linux functional-442811 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [c9ff972b6d4d] <==
	I1207 22:45:27.112240       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 22:45:27.113785       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 22:45:27.113805       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1207 22:45:27.115232       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1207 22:45:27.116901       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 22:45:27.127481       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 22:45:27.451017       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:45:27.451017       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:45:28.013053       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1207 22:45:28.426205       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 22:45:28.460038       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 22:45:28.480994       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 22:45:28.487143       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 22:45:30.606652       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 22:45:30.653920       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 22:45:30.704912       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 22:45:46.659446       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.200.238"}
	I1207 22:45:51.565249       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.181.88"}
	I1207 22:45:54.076127       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.134.119"}
	I1207 22:45:57.145808       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.16.235"}
	I1207 22:51:33.902860       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.151.143"}
	I1207 22:52:16.358533       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 22:52:16.463265       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.17.54"}
	I1207 22:52:16.477891       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.206.56"}
	I1207 22:55:27.045591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [79247fb2be94] <==
	I1207 22:44:41.225515       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225538       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225327       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225806       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225845       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225900       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226080       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226093       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226237       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226254       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226316       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226433       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226705       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1207 22:44:41.226904       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-442811"
	I1207 22:44:41.226954       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1207 22:44:41.227069       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227104       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227789       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227880       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227875       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.229278       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.322481       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.324646       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.324662       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 22:44:41.324666       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-controller-manager [81b5af16f115] <==
	I1207 22:45:30.258361       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258392       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258433       1 range_allocator.go:177] "Sending events to api server"
	I1207 22:45:30.258518       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258521       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1207 22:45:30.258698       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:30.258704       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258623       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258586       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258614       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258633       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258627       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258636       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.266934       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:30.266989       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.358127       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.358147       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 22:45:30.358154       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 22:45:30.367867       1 shared_informer.go:377] "Caches are synced"
	E1207 22:52:16.405357       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.409152       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.414778       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.416310       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.419791       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.422778       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [1241f85066cf] <==
	I1207 22:45:27.866139       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:45:27.929383       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:28.029574       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:28.029634       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:45:28.029756       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:45:28.054488       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:45:28.054553       1 server_linux.go:136] "Using iptables Proxier"
	I1207 22:45:28.061247       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:45:28.061709       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 22:45:28.061749       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:45:28.063126       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:45:28.063159       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:45:28.063200       1 config.go:200] "Starting service config controller"
	I1207 22:45:28.063206       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:45:28.063199       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:45:28.063226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:45:28.063264       1 config.go:309] "Starting node config controller"
	I1207 22:45:28.063271       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:45:28.063278       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:45:28.163244       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:45:28.163313       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:45:28.163332       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c731d93d4631] <==
	I1207 22:44:36.479925       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:44:36.568255       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:44:38.168788       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:38.168826       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:44:38.169011       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:44:38.212997       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:44:38.213059       1 server_linux.go:136] "Using iptables Proxier"
	I1207 22:44:38.218592       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:44:38.219063       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 22:44:38.219101       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:44:38.220955       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:44:38.220984       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:44:38.220986       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:44:38.221014       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:44:38.221097       1 config.go:309] "Starting node config controller"
	I1207 22:44:38.221103       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:44:38.221109       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:44:38.221482       1 config.go:200] "Starting service config controller"
	I1207 22:44:38.221666       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:44:38.321151       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 22:44:38.321157       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:44:38.322219       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8ccc720180cb] <==
	I1207 22:44:36.923472       1 serving.go:386] Generated self-signed cert in-memory
	W1207 22:44:38.008074       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 22:44:38.008306       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 22:44:38.008328       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 22:44:38.008462       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 22:44:38.053370       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 22:44:38.053407       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:44:38.056294       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:44:38.056353       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:44:38.056369       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 22:44:38.056891       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 22:44:38.156767       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [aeabb2a736d2] <==
	I1207 22:45:25.216680       1 serving.go:386] Generated self-signed cert in-memory
	W1207 22:45:27.046224       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 22:45:27.046264       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1207 22:45:27.046277       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 22:45:27.046286       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 22:45:27.062035       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 22:45:27.062058       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:45:27.063812       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:45:27.063845       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:27.063924       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 22:45:27.064004       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 22:45:27.164533       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 07 22:55:26 functional-442811 kubelet[8486]: E1207 22:55:26.281769    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-zm2lh" podUID="e691e40e-e4d5-4b7e-b852-ca016ddf9542"
	Dec 07 22:55:28 functional-442811 kubelet[8486]: E1207 22:55:28.279341    8486 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-6kf2d" containerName="dashboard-metrics-scraper"
	Dec 07 22:55:28 functional-442811 kubelet[8486]: E1207 22:55:28.282156    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-6kf2d" podUID="5a9529c4-8a48-414f-8791-063053747e2e"
	Dec 07 22:55:29 functional-442811 kubelet[8486]: E1207 22:55:29.282311    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="df055e75-dc3b-436d-9b15-60ec788da8a6"
	Dec 07 22:55:32 functional-442811 kubelet[8486]: E1207 22:55:32.280249    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-gldsc" podUID="df13df8d-919f-420d-b0e4-4e5e489d4991"
	Dec 07 22:55:34 functional-442811 kubelet[8486]: E1207 22:55:34.280677    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-6bwdx" podUID="bb6e8c71-aaa3-4a46-9946-a5ac8718a889"
	Dec 07 22:55:35 functional-442811 kubelet[8486]: E1207 22:55:35.279166    8486 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-chtwb" containerName="kubernetes-dashboard"
	Dec 07 22:55:35 functional-442811 kubelet[8486]: E1207 22:55:35.282123    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-chtwb" podUID="04735f3d-d1e5-4bcc-b82f-7645afbc28fb"
	Dec 07 22:55:36 functional-442811 kubelet[8486]: E1207 22:55:36.279670    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="a0787535-37fd-46f0-bd6d-f603d1557ee2"
	Dec 07 22:55:38 functional-442811 kubelet[8486]: E1207 22:55:38.279898    8486 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-442811" containerName="kube-controller-manager"
	Dec 07 22:55:39 functional-442811 kubelet[8486]: E1207 22:55:39.279481    8486 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-442811" containerName="kube-scheduler"
	Dec 07 22:55:40 functional-442811 kubelet[8486]: E1207 22:55:40.281854    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-zm2lh" podUID="e691e40e-e4d5-4b7e-b852-ca016ddf9542"
	Dec 07 22:55:40 functional-442811 kubelet[8486]: E1207 22:55:40.282205    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="df055e75-dc3b-436d-9b15-60ec788da8a6"
	Dec 07 22:55:42 functional-442811 kubelet[8486]: E1207 22:55:42.279782    8486 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-j4t8w" containerName="coredns"
	Dec 07 22:55:43 functional-442811 kubelet[8486]: E1207 22:55:43.279001    8486 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-6kf2d" containerName="dashboard-metrics-scraper"
	Dec 07 22:55:43 functional-442811 kubelet[8486]: E1207 22:55:43.281702    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-6kf2d" podUID="5a9529c4-8a48-414f-8791-063053747e2e"
	Dec 07 22:55:47 functional-442811 kubelet[8486]: E1207 22:55:47.280502    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-gldsc" podUID="df13df8d-919f-420d-b0e4-4e5e489d4991"
	Dec 07 22:55:48 functional-442811 kubelet[8486]: E1207 22:55:48.279877    8486 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-chtwb" containerName="kubernetes-dashboard"
	Dec 07 22:55:48 functional-442811 kubelet[8486]: E1207 22:55:48.282443    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-chtwb" podUID="04735f3d-d1e5-4bcc-b82f-7645afbc28fb"
	Dec 07 22:55:49 functional-442811 kubelet[8486]: E1207 22:55:49.280308    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-6bwdx" podUID="bb6e8c71-aaa3-4a46-9946-a5ac8718a889"
	Dec 07 22:55:51 functional-442811 kubelet[8486]: E1207 22:55:51.280253    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="a0787535-37fd-46f0-bd6d-f603d1557ee2"
	Dec 07 22:55:53 functional-442811 kubelet[8486]: E1207 22:55:53.282005    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-zm2lh" podUID="e691e40e-e4d5-4b7e-b852-ca016ddf9542"
	Dec 07 22:55:55 functional-442811 kubelet[8486]: E1207 22:55:55.282217    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="df055e75-dc3b-436d-9b15-60ec788da8a6"
	Dec 07 22:55:56 functional-442811 kubelet[8486]: E1207 22:55:56.279121    8486 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-6kf2d" containerName="dashboard-metrics-scraper"
	Dec 07 22:55:56 functional-442811 kubelet[8486]: E1207 22:55:56.281731    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-6kf2d" podUID="5a9529c4-8a48-414f-8791-063053747e2e"
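All of the ImagePullBackOff entries above are the same rate-limit failure surfacing through the kubelet. As a sketch, assuming the host can pull with authenticated credentials (docker login), the affected images could be side-loaded into the profile so the kubelet never pulls from Docker Hub; note the dashboard manifests pin digests, so the digest-qualified reference may also be needed:

	# pull on the host with authenticated credentials, then copy into the node
	docker pull kubernetesui/dashboard:v2.7.0
	minikube -p functional-442811 image load kubernetesui/dashboard:v2.7.0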
	
	
	==> storage-provisioner [0103126cf1fa] <==
	I1207 22:44:50.702907       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 22:44:50.702947       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1207 22:44:50.705054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:54.159918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:58.420300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:02.019059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:05.072830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.094990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.100139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 22:45:08.100304       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 22:45:08.100474       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-442811_7469dd5e-c834-4569-8c4d-488a475d8a7b!
	I1207 22:45:08.100445       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6744592d-205c-493c-9eed-33025935219a", APIVersion:"v1", ResourceVersion:"558", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-442811_7469dd5e-c834-4569-8c4d-488a475d8a7b became leader
	W1207 22:45:08.102399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.105316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 22:45:08.200705       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-442811_7469dd5e-c834-4569-8c4d-488a475d8a7b!
	W1207 22:45:10.108971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:10.113858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:12.117456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:12.121527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:14.124245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:14.128190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:16.131409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:16.136192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:18.138850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:18.142809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [88ac0a9f9519] <==
	W1207 22:55:33.503827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:35.507272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:35.512494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:37.515807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:37.519868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:39.523393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:39.528704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:41.531737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:41.535576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:43.538781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:43.542359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:45.546018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:45.550981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:47.554041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:47.559155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:49.562664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:49.566584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:51.570049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:51.573747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:53.576775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:53.580414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:55.583790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:55.588792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:57.592079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:57.597527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
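
The recurring "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner logs above come from its leader election, which still uses an Endpoints-based lock; they are noise rather than a failure cause. A quick way to look at the lock object and the replacement resources the warning points to, assuming the functional-442811 context from these logs is still available:

    # Endpoints object the provisioner uses as its leader-election lock
    kubectl --context functional-442811 -n kube-system get endpoints k8s.io-minikube-hostpath
    # EndpointSlice objects that the deprecation warning recommends instead of v1 Endpoints
    kubectl --context functional-442811 -n kube-system get endpointslices.discovery.k8s.io
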
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-442811 -n functional-442811
helpers_test.go:269: (dbg) Run:  kubectl --context functional-442811 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod dashboard-metrics-scraper-5565989548-6kf2d kubernetes-dashboard-b84665fb8-chtwb
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-442811 describe pod busybox-mount hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod dashboard-metrics-scraper-5565989548-6kf2d kubernetes-dashboard-b84665fb8-chtwb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-442811 describe pod busybox-mount hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod dashboard-metrics-scraper-5565989548-6kf2d kubernetes-dashboard-b84665fb8-chtwb: exit status 1 (115.060365ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:52:04 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  mount-munger:
	    Container ID:  docker://e878acdb795753bebae0d3951e3f7b095f3224bc0a9d688b6e8a2b128fd36dac
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 07 Dec 2025 22:52:06 +0000
	      Finished:     Sun, 07 Dec 2025 22:52:06 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxvhm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fxvhm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m55s  default-scheduler  Successfully assigned default/busybox-mount to functional-442811
	  Normal  Pulling    3m55s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m53s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.967s (1.967s including waiting). Image size: 4403845 bytes.
	  Normal  Created    3m53s  kubelet            Container created
	  Normal  Started    3m53s  kubelet            Container started
	
	
	Name:             hello-node-5758569b79-6bwdx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:51:33 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vq8rd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vq8rd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m25s                 default-scheduler  Successfully assigned default/hello-node-5758569b79-6bwdx to functional-442811
	  Normal   Pulling    80s (x5 over 4m25s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     79s (x5 over 4m24s)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     79s (x5 over 4m24s)   kubelet            Error: ErrImagePull
	  Warning  Failed     25s (x15 over 4m23s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    10s (x16 over 4m23s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-gldsc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dh5vh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dh5vh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-gldsc to functional-442811
	  Normal   Pulling    7m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m46s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-844cf969f6-zm2lh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:51 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-48z94 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-48z94:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-844cf969f6-zm2lh to functional-442811
	  Warning  Failed     10m                    kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m18s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m17s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     7m17s (x4 over 9m51s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m1s (x20 over 10m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    6s (x41 over 10m)      kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:54 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r8xnp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r8xnp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/nginx-svc to functional-442811
	  Warning  Failed     8m31s                 kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m59s (x5 over 10m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     6m58s (x4 over 10m)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m58s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m53s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4s (x42 over 10m)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:59 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vt2bl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vt2bl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/sp-pod to functional-442811
	  Normal   Pulling    7m12s (x5 over 10m)     kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m11s (x5 over 9m59s)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m11s (x5 over 9m59s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m57s (x21 over 9m59s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4m57s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-6kf2d" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-chtwb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-442811 describe pod busybox-mount hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod dashboard-metrics-scraper-5565989548-6kf2d kubernetes-dashboard-b84665fb8-chtwb: exit status 1
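
The two NotFound errors in stderr are a namespace mismatch rather than genuinely missing pods: the non-running pod list was gathered with -A, but the describe call passes no namespace and therefore only looks in default, while the dashboard pods normally live in the kubernetes-dashboard namespace (an assumption based on the standard dashboard addon). A namespaced check would look roughly like:

    # list and describe the dashboard pods in their own namespace (pod name taken from the list above)
    kubectl --context functional-442811 -n kubernetes-dashboard get pods
    kubectl --context functional-442811 -n kubernetes-dashboard describe pod kubernetes-dashboard-b84665fb8-chtwb
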
E1207 22:57:04.468332  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:57:49.706217  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.66s)
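
Every ImagePullBackOff in the describe output above traces back to the same cause: Docker Hub's unauthenticated pull rate limit ("toomanyrequests") on kicbase/echo-server, mysql:5.7, nginx:alpine and nginx. A minimal sketch of a workaround for a rerun, assuming the host daemon can still pull the images (or after a docker login with valid credentials), is to pull on the host and preload them into the node so the kubelet never hits the registry:

    docker login                          # authenticate the host daemon; credentials are assumed available
    docker pull kicbase/echo-server
    docker pull docker.io/nginx:alpine
    # load the host-local images into the functional-442811 node
    out/minikube-linux-amd64 -p functional-442811 image load kicbase/echo-server
    out/minikube-linux-amd64 -p functional-442811 image load docker.io/nginx:alpine
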

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (367.68s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [d491f99b-c404-41df-9fe9-00ede52c989f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004048324s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-442811 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-442811 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-442811 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-442811 apply -f testdata/storage-provisioner/pod.yaml
I1207 22:45:59.071457  397166 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [a0787535-37fd-46f0-bd6d-f603d1557ee2] Pending
helpers_test.go:352: "sp-pod" [a0787535-37fd-46f0-bd6d-f603d1557ee2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
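
The test is polling for a Ready pod carrying the test=storage-provisioner label; the same wait can be reproduced by hand, assuming the functional-442811 context, with:

    # block (up to 6 minutes) until the labelled pod reports Ready, then show where it landed
    kubectl --context functional-442811 -n default wait pod -l test=storage-provisioner --for=condition=Ready --timeout=6m0s
    kubectl --context functional-442811 -n default get pod -l test=storage-provisioner -o wide
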
E1207 22:47:04.468442  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:47:49.705653  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:47:49.712090  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:47:49.723469  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:47:49.744882  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:47:49.786327  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:47:49.868199  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:47:50.029738  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:47:50.351736  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:47:50.993449  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:47:52.275519  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:47:54.836962  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:47:59.958743  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:48:10.200789  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:48:30.682572  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:49:11.644515  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
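
These cert_rotation errors are unrelated to the PVC test itself: the shared kubeconfig still holds client-certificate references for earlier profiles (addons-549698, functional-304107) whose files have since been deleted. Checking which profile directories remain and pruning a stale context would look roughly like this (the delete-context call is only an illustration, not something the suite runs):

    ls /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/
    kubectl config get-contexts
    kubectl config delete-context addons-549698
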
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-442811 -n functional-442811
functional_test_pvc_test.go:140: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-12-07 22:51:59.394552437 +0000 UTC m=+1349.680605015
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-442811 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-442811 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-442811/192.168.49.2
Start Time:       Sun, 07 Dec 2025 22:45:59 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:  10.244.0.11
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vt2bl (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-vt2bl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-442811
  Normal   Pulling    3m12s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     3m11s (x5 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     3m11s (x5 over 5m59s)  kubelet            Error: ErrImagePull
  Normal   BackOff    57s (x21 over 5m59s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     57s (x21 over 5m59s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-442811 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-442811 logs sp-pod -n default: exit status 1 (67.632482ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-442811 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
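
The claim and provisioner look healthy in this run: sp-pod was scheduled and mounted its volume, and the only failing step is pulling docker.io/nginx, which hits the same Docker Hub rate limit as the other pods. A quick confirmation that the volume side is fine, assuming the same context:

    # expect "Bound" for the claim created from testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-442811 get pvc myclaim -o jsonpath='{.status.phase}{"\n"}'
    kubectl --context functional-442811 get pv
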
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-442811
helpers_test.go:243: (dbg) docker inspect functional-442811:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762",
	        "Created": "2025-12-07T22:43:22.049081307Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 456864,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T22:43:22.089713066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/hostname",
	        "HostsPath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/hosts",
	        "LogPath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762-json.log",
	        "Name": "/functional-442811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-442811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-442811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762",
	                "LowerDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470-init/diff:/var/lib/docker/overlay2/72e2c0d34d3438044c6ca8754190358557351efc0aeb527bd1060ce52e748152/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-442811",
	                "Source": "/var/lib/docker/volumes/functional-442811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-442811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-442811",
	                "name.minikube.sigs.k8s.io": "functional-442811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "31fe22828acd889527928fad9cf9de4644c6693bf1715496a16bc2b07706d2c3",
	            "SandboxKey": "/var/run/docker/netns/31fe22828acd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-442811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0b711061d8c4c0d449da65ff00c005cf89c83f72d15bf795a0f752ebfb4033e6",
	                    "EndpointID": "056c3331b34f02b412809772934c779a89252a7366e762ac002e00a13fc17922",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "96:63:b1:92:b7:e8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-442811",
	                        "1699178f8a7c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
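
For post-mortems like this, the full docker inspect dump can be narrowed to the fields that matter (container state, node IP, host-mapped API server port) with a Go template, for example:

    docker inspect -f '{{.State.Status}}' functional-442811
    docker inspect -f '{{(index .NetworkSettings.Networks "functional-442811").IPAddress}}' functional-442811
    # prints the 127.0.0.1 host port that forwards to the API server port 8441
    docker port functional-442811 8441
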
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-442811 -n functional-442811
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-442811 logs -n 25: (1.007559778s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-442811 ssh sudo cat /etc/ssl/certs/3971662.pem                                                                                                   │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ ssh     │ functional-442811 ssh -n functional-442811 sudo cat /home/docker/cp-test.txt                                                                                │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ ssh     │ functional-442811 ssh sudo cat /usr/share/ca-certificates/3971662.pem                                                                                       │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ image   │ functional-442811 image ls                                                                                                                                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ cp      │ functional-442811 cp functional-442811:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp757346297/001/cp-test.txt │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ ssh     │ functional-442811 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                    │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ image   │ functional-442811 image load --daemon kicbase/echo-server:functional-442811 --alsologtostderr                                                               │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ ssh     │ functional-442811 ssh echo hello                                                                                                                            │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ cp      │ functional-442811 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                   │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ ssh     │ functional-442811 ssh cat /etc/hostname                                                                                                                     │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ image   │ functional-442811 image ls                                                                                                                                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ ssh     │ functional-442811 ssh -n functional-442811 sudo cat /tmp/does/not/exist/cp-test.txt                                                                         │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ tunnel  │ functional-442811 tunnel --alsologtostderr                                                                                                                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │                     │
	│ tunnel  │ functional-442811 tunnel --alsologtostderr                                                                                                                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │                     │
	│ tunnel  │ functional-442811 tunnel --alsologtostderr                                                                                                                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │                     │
	│ image   │ functional-442811 image load --daemon kicbase/echo-server:functional-442811 --alsologtostderr                                                               │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ image   │ functional-442811 image ls                                                                                                                                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ image   │ functional-442811 image save kicbase/echo-server:functional-442811 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ image   │ functional-442811 image rm kicbase/echo-server:functional-442811 --alsologtostderr                                                                          │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ image   │ functional-442811 image ls                                                                                                                                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ image   │ functional-442811 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr                                        │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ image   │ functional-442811 image ls                                                                                                                                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ image   │ functional-442811 image save --daemon kicbase/echo-server:functional-442811 --alsologtostderr                                                               │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ addons  │ functional-442811 addons list                                                                                                                               │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	│ addons  │ functional-442811 addons list -o json                                                                                                                       │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:45 UTC │ 07 Dec 25 22:45 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:45:03
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:45:03.022833  465303 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:45:03.023109  465303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:45:03.023113  465303 out.go:374] Setting ErrFile to fd 2...
	I1207 22:45:03.023116  465303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:45:03.023343  465303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 22:45:03.023844  465303 out.go:368] Setting JSON to false
	I1207 22:45:03.024969  465303 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5246,"bootTime":1765142257,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:45:03.025029  465303 start.go:143] virtualization: kvm guest
	I1207 22:45:03.026954  465303 out.go:179] * [functional-442811] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:45:03.028326  465303 notify.go:221] Checking for updates...
	I1207 22:45:03.028336  465303 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:45:03.029497  465303 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:45:03.030754  465303 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	I1207 22:45:03.032122  465303 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	I1207 22:45:03.033299  465303 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:45:03.034510  465303 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:45:03.036142  465303 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1207 22:45:03.036236  465303 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:45:03.063513  465303 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:45:03.063629  465303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:45:03.117132  465303 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-07 22:45:03.107519365 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:45:03.117274  465303 docker.go:319] overlay module found
	I1207 22:45:03.119359  465303 out.go:179] * Using the docker driver based on existing profile
	I1207 22:45:03.120464  465303 start.go:309] selected driver: docker
	I1207 22:45:03.120472  465303 start.go:927] validating driver "docker" against &{Name:functional-442811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-442811 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:45:03.120552  465303 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:45:03.120720  465303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:45:03.179383  465303 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-07 22:45:03.169687946 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:45:03.179981  465303 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 22:45:03.180007  465303 cni.go:84] Creating CNI manager for ""
	I1207 22:45:03.180065  465303 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 22:45:03.180121  465303 start.go:353] cluster config:
	{Name:functional-442811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-442811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:45:03.181713  465303 out.go:179] * Starting "functional-442811" primary control-plane node in "functional-442811" cluster
	I1207 22:45:03.182810  465303 cache.go:134] Beginning downloading kic base image for docker with docker
	I1207 22:45:03.184057  465303 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 22:45:03.185354  465303 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1207 22:45:03.185387  465303 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-393577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1207 22:45:03.185394  465303 cache.go:65] Caching tarball of preloaded images
	I1207 22:45:03.185451  465303 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 22:45:03.185463  465303 preload.go:238] Found /home/jenkins/minikube-integration/22054-393577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1207 22:45:03.185469  465303 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1207 22:45:03.185562  465303 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/config.json ...
	I1207 22:45:03.205900  465303 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 22:45:03.205913  465303 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 22:45:03.205940  465303 cache.go:243] Successfully downloaded all kic artifacts
	I1207 22:45:03.205978  465303 start.go:360] acquireMachinesLock for functional-442811: {Name:mkf9789bf6dcbbb44c4e6eb89a2fd19362dfc7c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 22:45:03.206046  465303 start.go:364] duration metric: took 48.131µs to acquireMachinesLock for "functional-442811"
	I1207 22:45:03.206064  465303 start.go:96] Skipping create...Using existing machine configuration
	I1207 22:45:03.206069  465303 fix.go:54] fixHost starting: 
	I1207 22:45:03.206409  465303 cli_runner.go:164] Run: docker container inspect functional-442811 --format={{.State.Status}}
	I1207 22:45:03.224751  465303 fix.go:112] recreateIfNeeded on functional-442811: state=Running err=<nil>
	W1207 22:45:03.224788  465303 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 22:45:03.226723  465303 out.go:252] * Updating the running docker "functional-442811" container ...
	I1207 22:45:03.226748  465303 machine.go:94] provisionDockerMachine start ...
	I1207 22:45:03.226819  465303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
	I1207 22:45:03.244586  465303 main.go:143] libmachine: Using SSH client type: native
	I1207 22:45:03.244961  465303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33167 <nil> <nil>}
	I1207 22:45:03.244971  465303 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 22:45:03.373685  465303 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-442811
	
	I1207 22:45:03.373705  465303 ubuntu.go:182] provisioning hostname "functional-442811"
	I1207 22:45:03.373766  465303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
	I1207 22:45:03.393791  465303 main.go:143] libmachine: Using SSH client type: native
	I1207 22:45:03.393991  465303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33167 <nil> <nil>}
	I1207 22:45:03.393998  465303 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-442811 && echo "functional-442811" | sudo tee /etc/hostname
	I1207 22:45:03.532502  465303 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-442811
	
	I1207 22:45:03.532614  465303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
	I1207 22:45:03.551178  465303 main.go:143] libmachine: Using SSH client type: native
	I1207 22:45:03.551410  465303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33167 <nil> <nil>}
	I1207 22:45:03.551420  465303 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-442811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-442811/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-442811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 22:45:03.679529  465303 main.go:143] libmachine: SSH cmd err, output: <nil>: 
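	(Aside: the shell block above makes sure the machine's hostname resolves locally via a 127.0.1.1 entry. Below is a minimal standalone sketch of the same decision logic in Go, not minikube's actual code; the sample /etc/hosts contents and hostname are illustrative.)

	package main

	import (
		"fmt"
		"regexp"
	)

	// hostsFix mirrors the shell logic above: return "" when the hostname is
	// already present, a sed command when an existing 127.0.1.1 line should be
	// rewritten, and an append command otherwise.
	func hostsFix(hosts, hostname string) string {
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(hosts) {
			return "" // hostname already resolves, nothing to do
		}
		if regexp.MustCompile(`(?m)^127\.0\.1\.1\s`).MatchString(hosts) {
			return fmt.Sprintf("sudo sed -i 's/^127.0.1.1\\s.*/127.0.1.1 %s/' /etc/hosts", hostname)
		}
		return fmt.Sprintf("echo '127.0.1.1 %s' | sudo tee -a /etc/hosts", hostname)
	}

	func main() {
		sample := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
		fmt.Println(hostsFix(sample, "functional-442811"))
	}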
	I1207 22:45:03.679548  465303 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-393577/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-393577/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-393577/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-393577/.minikube}
	I1207 22:45:03.679589  465303 ubuntu.go:190] setting up certificates
	I1207 22:45:03.679614  465303 provision.go:84] configureAuth start
	I1207 22:45:03.679670  465303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-442811
	I1207 22:45:03.697570  465303 provision.go:143] copyHostCerts
	I1207 22:45:03.697657  465303 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-393577/.minikube/ca.pem, removing ...
	I1207 22:45:03.697665  465303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-393577/.minikube/ca.pem
	I1207 22:45:03.697734  465303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-393577/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-393577/.minikube/ca.pem (1082 bytes)
	I1207 22:45:03.697842  465303 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-393577/.minikube/cert.pem, removing ...
	I1207 22:45:03.697846  465303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-393577/.minikube/cert.pem
	I1207 22:45:03.697879  465303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-393577/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-393577/.minikube/cert.pem (1123 bytes)
	I1207 22:45:03.697959  465303 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-393577/.minikube/key.pem, removing ...
	I1207 22:45:03.697963  465303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-393577/.minikube/key.pem
	I1207 22:45:03.697987  465303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-393577/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-393577/.minikube/key.pem (1679 bytes)
	I1207 22:45:03.698046  465303 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-393577/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-393577/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-393577/.minikube/certs/ca-key.pem org=jenkins.functional-442811 san=[127.0.0.1 192.168.49.2 functional-442811 localhost minikube]
	I1207 22:45:03.893568  465303 provision.go:177] copyRemoteCerts
	I1207 22:45:03.893629  465303 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 22:45:03.893665  465303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
	I1207 22:45:03.911569  465303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-442811/id_rsa Username:docker}
	I1207 22:45:04.006224  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 22:45:04.024937  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1207 22:45:04.042816  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 22:45:04.060563  465303 provision.go:87] duration metric: took 380.937164ms to configureAuth
	I1207 22:45:04.060610  465303 ubuntu.go:206] setting minikube options for container-runtime
	I1207 22:45:04.060803  465303 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1207 22:45:04.060851  465303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
	I1207 22:45:04.079002  465303 main.go:143] libmachine: Using SSH client type: native
	I1207 22:45:04.079210  465303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33167 <nil> <nil>}
	I1207 22:45:04.079216  465303 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1207 22:45:04.208779  465303 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1207 22:45:04.208794  465303 ubuntu.go:71] root file system type: overlay
	I1207 22:45:04.208919  465303 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1207 22:45:04.208975  465303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
	I1207 22:45:04.227213  465303 main.go:143] libmachine: Using SSH client type: native
	I1207 22:45:04.227429  465303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33167 <nil> <nil>}
	I1207 22:45:04.227482  465303 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1207 22:45:04.364723  465303 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1207 22:45:04.364797  465303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
	I1207 22:45:04.384381  465303 main.go:143] libmachine: Using SSH client type: native
	I1207 22:45:04.384676  465303 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33167 <nil> <nil>}
	I1207 22:45:04.384693  465303 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1207 22:45:04.518770  465303 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 22:45:04.518787  465303 machine.go:97] duration metric: took 1.292032718s to provisionDockerMachine
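	(Aside: the `diff ... || { mv ...; systemctl restart docker; }` one-liner above only restarts Docker when the generated unit actually differs from the installed one. A minimal sketch of that idempotent "write only if changed" pattern, with an illustrative local path rather than minikube's remote flow:)

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// writeIfChanged writes data to path only when the current contents differ,
	// and reports whether a write (and therefore a service restart) is needed.
	func writeIfChanged(path string, data []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, data) {
			return false, nil // identical: skip the write and the restart
		}
		if err := os.WriteFile(path, data, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		changed, err := writeIfChanged("/tmp/docker.service.example", []byte("[Unit]\nDescription=example\n"))
		if err != nil {
			fmt.Println("write failed:", err)
			return
		}
		if changed {
			fmt.Println("unit changed: would run daemon-reload && restart docker")
		} else {
			fmt.Println("unit unchanged: no restart needed")
		}
	}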
	I1207 22:45:04.518800  465303 start.go:293] postStartSetup for "functional-442811" (driver="docker")
	I1207 22:45:04.518831  465303 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 22:45:04.518907  465303 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 22:45:04.518947  465303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
	I1207 22:45:04.538342  465303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-442811/id_rsa Username:docker}
	I1207 22:45:04.633774  465303 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 22:45:04.637249  465303 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 22:45:04.637265  465303 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 22:45:04.637273  465303 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-393577/.minikube/addons for local assets ...
	I1207 22:45:04.637318  465303 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-393577/.minikube/files for local assets ...
	I1207 22:45:04.637386  465303 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-393577/.minikube/files/etc/ssl/certs/3971662.pem -> 3971662.pem in /etc/ssl/certs
	I1207 22:45:04.637449  465303 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-393577/.minikube/files/etc/test/nested/copy/397166/hosts -> hosts in /etc/test/nested/copy/397166
	I1207 22:45:04.637476  465303 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/397166
	I1207 22:45:04.645149  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/files/etc/ssl/certs/3971662.pem --> /etc/ssl/certs/3971662.pem (1708 bytes)
	I1207 22:45:04.662681  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/files/etc/test/nested/copy/397166/hosts --> /etc/test/nested/copy/397166/hosts (40 bytes)
	I1207 22:45:04.680326  465303 start.go:296] duration metric: took 161.513197ms for postStartSetup
	I1207 22:45:04.680399  465303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 22:45:04.680432  465303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
	I1207 22:45:04.698937  465303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-442811/id_rsa Username:docker}
	I1207 22:45:04.791297  465303 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 22:45:04.796659  465303 fix.go:56] duration metric: took 1.590582777s for fixHost
	I1207 22:45:04.796681  465303 start.go:83] releasing machines lock for "functional-442811", held for 1.590627368s
	I1207 22:45:04.796756  465303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-442811
	I1207 22:45:04.815411  465303 ssh_runner.go:195] Run: cat /version.json
	I1207 22:45:04.815446  465303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
	I1207 22:45:04.815444  465303 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 22:45:04.815517  465303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
	I1207 22:45:04.835045  465303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-442811/id_rsa Username:docker}
	I1207 22:45:04.835336  465303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-442811/id_rsa Username:docker}
	I1207 22:45:04.928068  465303 ssh_runner.go:195] Run: systemctl --version
	I1207 22:45:04.987704  465303 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 22:45:04.993185  465303 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 22:45:04.993241  465303 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 22:45:05.001440  465303 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 22:45:05.001468  465303 start.go:496] detecting cgroup driver to use...
	I1207 22:45:05.001505  465303 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 22:45:05.001619  465303 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 22:45:05.016467  465303 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1207 22:45:05.026256  465303 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1207 22:45:05.035504  465303 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1207 22:45:05.035558  465303 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1207 22:45:05.044764  465303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 22:45:05.053803  465303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1207 22:45:05.063210  465303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 22:45:05.072636  465303 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 22:45:05.081016  465303 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1207 22:45:05.090157  465303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1207 22:45:05.099261  465303 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1207 22:45:05.108306  465303 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 22:45:05.116776  465303 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 22:45:05.124400  465303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:45:05.238952  465303 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1207 22:45:05.338535  465303 start.go:496] detecting cgroup driver to use...
	I1207 22:45:05.338576  465303 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 22:45:05.338652  465303 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1207 22:45:05.353017  465303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 22:45:05.365855  465303 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 22:45:05.386144  465303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 22:45:05.400275  465303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1207 22:45:05.413351  465303 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 22:45:05.427763  465303 ssh_runner.go:195] Run: which cri-dockerd
	I1207 22:45:05.431553  465303 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1207 22:45:05.439671  465303 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1207 22:45:05.453132  465303 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1207 22:45:05.578695  465303 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1207 22:45:05.697651  465303 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I1207 22:45:05.697782  465303 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1207 22:45:05.712483  465303 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1207 22:45:05.725041  465303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:45:05.844160  465303 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1207 22:45:21.749377  465303 ssh_runner.go:235] Completed: sudo systemctl restart docker: (15.905186161s)
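	(Aside: slow remote commands such as the ~15.9s docker restart above are reported with a duration metric. A sketch of timing a command and logging only when it exceeds a threshold; the threshold and sample command are illustrative, not the test harness's own values:)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runTimed runs a command and prints a "Completed" line when it took
	// noticeably long, similar in spirit to the log line above.
	func runTimed(name string, args ...string) error {
		start := time.Now()
		err := exec.Command(name, args...).Run()
		if d := time.Since(start); d > 2*time.Second {
			fmt.Printf("Completed: %s: (%s)\n", name, d)
		}
		return err
	}

	func main() {
		if err := runTimed("sleep", "3"); err != nil {
			fmt.Println("command failed:", err)
		}
	}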
	I1207 22:45:21.749444  465303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 22:45:21.765160  465303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1207 22:45:21.782278  465303 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1207 22:45:21.817515  465303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1207 22:45:21.833160  465303 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1207 22:45:21.940155  465303 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1207 22:45:22.034346  465303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:45:22.125109  465303 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1207 22:45:22.145831  465303 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1207 22:45:22.158490  465303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:45:22.251205  465303 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1207 22:45:22.334549  465303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1207 22:45:22.348117  465303 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1207 22:45:22.348170  465303 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
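	(Aside: "Will wait 60s for socket path" amounts to polling for the CRI socket until a deadline. A minimal sketch of that wait loop, assuming os.Stat suffices as the existence check; the path and poll interval are illustrative:)

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls until path exists or the timeout elapses.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil // the socket (or file) showed up
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("socket is present")
		}
	}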
	I1207 22:45:22.352001  465303 start.go:564] Will wait 60s for crictl version
	I1207 22:45:22.352056  465303 ssh_runner.go:195] Run: which crictl
	I1207 22:45:22.355695  465303 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 22:45:22.379560  465303 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1207 22:45:22.379646  465303 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1207 22:45:22.404260  465303 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1207 22:45:22.430675  465303 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.2 ...
	I1207 22:45:22.430739  465303 cli_runner.go:164] Run: docker network inspect functional-442811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 22:45:22.448191  465303 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 22:45:22.454322  465303 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1207 22:45:22.455412  465303 kubeadm.go:884] updating cluster {Name:functional-442811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-442811 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 22:45:22.455544  465303 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1207 22:45:22.455591  465303 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1207 22:45:22.476779  465303 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-442811
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1207 22:45:22.476794  465303 docker.go:621] Images already preloaded, skipping extraction
	I1207 22:45:22.476842  465303 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1207 22:45:22.499646  465303 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-442811
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1207 22:45:22.499662  465303 cache_images.go:86] Images are preloaded, skipping loading
	I1207 22:45:22.499670  465303 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1207 22:45:22.499787  465303 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-442811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-442811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 22:45:22.499838  465303 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1207 22:45:22.552046  465303 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1207 22:45:22.552121  465303 cni.go:84] Creating CNI manager for ""
	I1207 22:45:22.552131  465303 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 22:45:22.552141  465303 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 22:45:22.552160  465303 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-442811 NodeName:functional-442811 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfi
gOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 22:45:22.552287  465303 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-442811"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 22:45:22.552339  465303 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1207 22:45:22.560611  465303 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 22:45:22.560676  465303 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 22:45:22.568379  465303 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1207 22:45:22.581734  465303 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1207 22:45:22.595243  465303 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
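	(Aside: the kubeadm config dumped above carries the user's enable-admission-plugins override into the apiServer extraArgs. A sketch of rendering that fragment from a map with text/template; the template shape is illustrative, not minikube's actual template:)

	package main

	import (
		"os"
		"text/template"
	)

	const frag = `apiServer:
	  extraArgs:
	{{- range $k, $v := .}}
	    - name: "{{$k}}"
	      value: "{{$v}}"
	{{- end}}
	`

	func main() {
		args := map[string]string{"enable-admission-plugins": "NamespaceAutoProvision"}
		t := template.Must(template.New("apiserver").Parse(frag))
		if err := t.Execute(os.Stdout, args); err != nil {
			panic(err)
		}
	}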
	I1207 22:45:22.608509  465303 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1207 22:45:22.612658  465303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:45:22.705989  465303 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 22:45:22.718960  465303 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811 for IP: 192.168.49.2
	I1207 22:45:22.718972  465303 certs.go:195] generating shared ca certs ...
	I1207 22:45:22.718986  465303 certs.go:227] acquiring lock for ca certs: {Name:mk24ef0f84330e45aa8811b13559d7d0a3d9418e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:45:22.719149  465303 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-393577/.minikube/ca.key
	I1207 22:45:22.719199  465303 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-393577/.minikube/proxy-client-ca.key
	I1207 22:45:22.719209  465303 certs.go:257] generating profile certs ...
	I1207 22:45:22.719288  465303 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.key
	I1207 22:45:22.719328  465303 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/apiserver.key.98241e4c
	I1207 22:45:22.719361  465303 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/proxy-client.key
	I1207 22:45:22.719463  465303 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-393577/.minikube/certs/397166.pem (1338 bytes)
	W1207 22:45:22.719491  465303 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-393577/.minikube/certs/397166_empty.pem, impossibly tiny 0 bytes
	I1207 22:45:22.719497  465303 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-393577/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 22:45:22.719520  465303 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-393577/.minikube/certs/ca.pem (1082 bytes)
	I1207 22:45:22.719539  465303 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-393577/.minikube/certs/cert.pem (1123 bytes)
	I1207 22:45:22.719559  465303 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-393577/.minikube/certs/key.pem (1679 bytes)
	I1207 22:45:22.719613  465303 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-393577/.minikube/files/etc/ssl/certs/3971662.pem (1708 bytes)
	I1207 22:45:22.720178  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 22:45:22.737872  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1207 22:45:22.755042  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 22:45:22.771997  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 22:45:22.789257  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1207 22:45:22.806906  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 22:45:22.825663  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 22:45:22.849146  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 22:45:22.871174  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/files/etc/ssl/certs/3971662.pem --> /usr/share/ca-certificates/3971662.pem (1708 bytes)
	I1207 22:45:22.896837  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 22:45:22.919779  465303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-393577/.minikube/certs/397166.pem --> /usr/share/ca-certificates/397166.pem (1338 bytes)
	I1207 22:45:22.942281  465303 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 22:45:22.958189  465303 ssh_runner.go:195] Run: openssl version
	I1207 22:45:22.966811  465303 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:45:22.975629  465303 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 22:45:22.985531  465303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:45:22.989993  465303 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:30 /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:45:22.990051  465303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:45:23.028717  465303 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 22:45:23.036732  465303 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/397166.pem
	I1207 22:45:23.044388  465303 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/397166.pem /etc/ssl/certs/397166.pem
	I1207 22:45:23.053334  465303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/397166.pem
	I1207 22:45:23.057747  465303 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 22:43 /usr/share/ca-certificates/397166.pem
	I1207 22:45:23.057804  465303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/397166.pem
	I1207 22:45:23.095668  465303 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 22:45:23.104050  465303 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3971662.pem
	I1207 22:45:23.112059  465303 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3971662.pem /etc/ssl/certs/3971662.pem
	I1207 22:45:23.120838  465303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3971662.pem
	I1207 22:45:23.124834  465303 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 22:43 /usr/share/ca-certificates/3971662.pem
	I1207 22:45:23.124880  465303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3971662.pem
	I1207 22:45:23.158870  465303 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 22:45:23.166773  465303 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 22:45:23.170670  465303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 22:45:23.205228  465303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 22:45:23.239692  465303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 22:45:23.273715  465303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 22:45:23.308496  465303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 22:45:23.343551  465303 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
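	(Aside: the `openssl x509 -checkend 86400` runs above ask whether each cert expires within the next 24 hours. The same check can be done natively with crypto/x509; this is a sketch with an illustrative path, not the minikube code path:)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires inside
	// the given window, mirroring openssl's -checkend behaviour.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}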
	I1207 22:45:23.378131  465303 kubeadm.go:401] StartCluster: {Name:functional-442811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-442811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:45:23.378298  465303 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1207 22:45:23.398498  465303 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 22:45:23.406713  465303 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1207 22:45:23.406723  465303 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1207 22:45:23.406762  465303 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 22:45:23.414296  465303 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 22:45:23.414913  465303 kubeconfig.go:125] found "functional-442811" server: "https://192.168.49.2:8441"
	I1207 22:45:23.416632  465303 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 22:45:23.424379  465303 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-07 22:43:28.471524970 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-07 22:45:22.605652135 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
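
	The diff above is why minikube reconfigures rather than reuses the running control plane: the profile's ExtraOptions set the apiserver's enable-admission-plugins to NamespaceAutoProvision, which no longer matches the kubeadm.yaml written at 22:43. A minimal Go sketch of this kind of drift check, shelling out to `diff -u` on the same two paths the log shows (illustrative only, not minikube's actual kubeadm.go code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configDrifted runs `diff -u old new`; diff exits 0 when the files are
	// identical, 1 when they differ, and >1 on a real error.
	func configDrifted(oldPath, newPath string) (bool, string, error) {
		out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, "", nil // identical: restart without reconfiguring
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, string(out), nil // drift detected: reconfigure from the new file
		}
		return false, "", err // diff itself failed (e.g. missing file)
	}

	func main() {
		drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println("diff failed:", err)
			return
		}
		if drifted {
			fmt.Print("detected kubeadm config drift:\n" + diff)
		}
	}
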
	I1207 22:45:23.424387  465303 kubeadm.go:1161] stopping kube-system containers ...
	I1207 22:45:23.424438  465303 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1207 22:45:23.446907  465303 docker.go:484] Stopping containers: [9a8f25fc3ab8 ea9ee1082fcc 74420559639f a68438562a81 0103126cf1fa 4e35a7433174 8ccc720180cb 00da6fd104bc 79247fb2be94 ff1925c035bb c731d93d4631 c42fbed28717 35d62fd2758e 75e5fd6f2ed6 8e814ef6acfa fb9985bca5bd 18663e96ba84 eda0c18c12a7 bce247a3b927 3aba4968e0d8 c33e83cfa2ea 8dd7e50d6008 a454b4fe5066 042859710b5e 7964caa7427f fcf1793b9e22 d07e42d58059]
	I1207 22:45:23.446985  465303 ssh_runner.go:195] Run: docker stop 9a8f25fc3ab8 ea9ee1082fcc 74420559639f a68438562a81 0103126cf1fa 4e35a7433174 8ccc720180cb 00da6fd104bc 79247fb2be94 ff1925c035bb c731d93d4631 c42fbed28717 35d62fd2758e 75e5fd6f2ed6 8e814ef6acfa fb9985bca5bd 18663e96ba84 eda0c18c12a7 bce247a3b927 3aba4968e0d8 c33e83cfa2ea 8dd7e50d6008 a454b4fe5066 042859710b5e 7964caa7427f fcf1793b9e22 d07e42d58059
	I1207 22:45:23.538413  465303 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 22:45:23.579554  465303 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 22:45:23.588178  465303 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  7 22:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec  7 22:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Dec  7 22:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec  7 22:43 /etc/kubernetes/scheduler.conf
	
	I1207 22:45:23.588225  465303 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1207 22:45:23.596322  465303 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1207 22:45:23.604040  465303 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1207 22:45:23.604094  465303 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 22:45:23.611621  465303 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1207 22:45:23.619220  465303 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1207 22:45:23.619294  465303 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 22:45:23.627256  465303 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1207 22:45:23.634927  465303 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1207 22:45:23.634983  465303 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
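
	The grep/rm pairs above implement minikube's check that each kubeconfig under /etc/kubernetes still points at https://control-plane.minikube.internal:8441; files that do not are deleted so the following `kubeadm init phase kubeconfig` run regenerates them. A self-contained sketch of that check (illustrative only; the endpoint and file paths are taken from the log, the rest is assumed):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8441"
		for _, path := range []string{
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(path)
			if err != nil {
				fmt.Printf("%s: %v (skipping)\n", path, err)
				continue
			}
			if !strings.Contains(string(data), endpoint) {
				// Stale kubeconfig: remove it so kubeadm can regenerate it.
				fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
				_ = os.Remove(path)
			}
		}
	}
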
	I1207 22:45:23.642352  465303 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 22:45:23.650295  465303 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 22:45:23.691756  465303 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 22:45:24.003477  465303 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 22:45:24.179003  465303 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 22:45:24.230616  465303 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1207 22:45:24.293148  465303 api_server.go:52] waiting for apiserver process to appear ...
	I1207 22:45:24.293218  465303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 22:45:24.793705  465303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 22:45:25.293360  465303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 22:45:25.305169  465303 api_server.go:72] duration metric: took 1.012033592s to wait for apiserver process to appear ...
	I1207 22:45:25.305196  465303 api_server.go:88] waiting for apiserver healthz status ...
	I1207 22:45:25.305220  465303 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1207 22:45:27.044966  465303 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1207 22:45:27.044988  465303 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1207 22:45:27.045002  465303 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1207 22:45:27.051248  465303 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 22:45:27.051269  465303 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[-]poststarthook/start-kube-apiserver-identity-lease-controller failed: reason withheld
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 22:45:27.305665  465303 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1207 22:45:27.311903  465303 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 22:45:27.311922  465303 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 22:45:27.806091  465303 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1207 22:45:27.810728  465303 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 22:45:27.810749  465303 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 22:45:28.305319  465303 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1207 22:45:28.310176  465303 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1207 22:45:28.316941  465303 api_server.go:141] control plane version: v1.35.0-beta.0
	I1207 22:45:28.316959  465303 api_server.go:131] duration metric: took 3.011758995s to wait for apiserver health ...
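
	The 403 and 500 responses earlier in this block are expected while the restarted apiserver finishes its post-start hooks; minikube simply polls /healthz until it returns 200 "ok", which takes about three seconds here. A self-contained sketch of such a poll, assuming the endpoint https://192.168.49.2:8441/healthz from the log and skipping TLS verification the way an anonymous probe would (illustrative, not minikube's api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The probe presents no client certificate, so server cert
			// verification is skipped here (sketch only).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.49.2:8441/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver did not become healthy before the deadline")
	}
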
	I1207 22:45:28.316968  465303 cni.go:84] Creating CNI manager for ""
	I1207 22:45:28.316978  465303 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 22:45:28.318930  465303 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 22:45:28.320263  465303 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 22:45:28.328710  465303 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (494 bytes)
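
	The 494-byte /etc/cni/net.d/1-k8s.conflist written here is not reproduced in the log; the sketch below only generates a generic bridge-plus-portmap CNI config of roughly that shape so the structure is visible. Every field value (name, subnet, plugin options, cniVersion) is an assumption for illustration, not the exact file minikube installs:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Illustrative bridge CNI conflist; values are assumptions, not the
		// exact 1-k8s.conflist that minikube writes.
		conflist := map[string]interface{}{
			"cniVersion": "0.3.1",
			"name":       "k8s-pod-network",
			"plugins": []map[string]interface{}{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.244.0.0/24",
					},
				},
				{
					"type":         "portmap",
					"capabilities": map[string]interface{}{"portMappings": true},
				},
			},
		}
		b, _ := json.MarshalIndent(conflist, "", "  ")
		fmt.Println(string(b))
	}
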
	I1207 22:45:28.341472  465303 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 22:45:28.345206  465303 system_pods.go:59] 7 kube-system pods found
	I1207 22:45:28.345232  465303 system_pods.go:61] "coredns-7d764666f9-j4t8w" [a2e36c52-bbf6-4d92-9d08-a98e923801cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:45:28.345241  465303 system_pods.go:61] "etcd-functional-442811" [88a8f071-6fe4-4a21-8ef5-db5dec449a3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 22:45:28.345245  465303 system_pods.go:61] "kube-apiserver-functional-442811" [652d4c1f-b079-43de-b9f2-2bb63a1792fb] Pending
	I1207 22:45:28.345253  465303 system_pods.go:61] "kube-controller-manager-functional-442811" [8792980f-102a-4d98-a5be-9145d3abe25a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 22:45:28.345258  465303 system_pods.go:61] "kube-proxy-d52sm" [96ea8ebe-72da-44ca-a3dc-f5fbfbfae18b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 22:45:28.345262  465303 system_pods.go:61] "kube-scheduler-functional-442811" [fe22b3c6-2b59-4b9d-9c96-4429c664a4aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 22:45:28.345267  465303 system_pods.go:61] "storage-provisioner" [d491f99b-c404-41df-9fe9-00ede52c989f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:45:28.345271  465303 system_pods.go:74] duration metric: took 3.788977ms to wait for pod list to return data ...
	I1207 22:45:28.345277  465303 node_conditions.go:102] verifying NodePressure condition ...
	I1207 22:45:28.347829  465303 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 22:45:28.347843  465303 node_conditions.go:123] node cpu capacity is 8
	I1207 22:45:28.347855  465303 node_conditions.go:105] duration metric: took 2.574417ms to run NodePressure ...
	I1207 22:45:28.347896  465303 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 22:45:28.593267  465303 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1207 22:45:28.595934  465303 kubeadm.go:744] kubelet initialised
	I1207 22:45:28.595944  465303 kubeadm.go:745] duration metric: took 2.663736ms waiting for restarted kubelet to initialise ...
	I1207 22:45:28.595963  465303 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 22:45:28.606292  465303 ops.go:34] apiserver oom_adj: -16
	I1207 22:45:28.606304  465303 kubeadm.go:602] duration metric: took 5.199576081s to restartPrimaryControlPlane
	I1207 22:45:28.606312  465303 kubeadm.go:403] duration metric: took 5.228234568s to StartCluster
	I1207 22:45:28.606330  465303 settings.go:142] acquiring lock: {Name:mk710cfcd71952e5d863e80b51b7aca50ad235a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:45:28.606409  465303 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-393577/kubeconfig
	I1207 22:45:28.606977  465303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-393577/kubeconfig: {Name:mk66bad86c39e2ac08b2397c070713ca06539383 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:45:28.607194  465303 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 22:45:28.607262  465303 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 22:45:28.607344  465303 addons.go:70] Setting storage-provisioner=true in profile "functional-442811"
	I1207 22:45:28.607358  465303 addons.go:239] Setting addon storage-provisioner=true in "functional-442811"
	W1207 22:45:28.607365  465303 addons.go:248] addon storage-provisioner should already be in state true
	I1207 22:45:28.607378  465303 addons.go:70] Setting default-storageclass=true in profile "functional-442811"
	I1207 22:45:28.607391  465303 host.go:66] Checking if "functional-442811" exists ...
	I1207 22:45:28.607401  465303 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-442811"
	I1207 22:45:28.607449  465303 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1207 22:45:28.607706  465303 cli_runner.go:164] Run: docker container inspect functional-442811 --format={{.State.Status}}
	I1207 22:45:28.607796  465303 cli_runner.go:164] Run: docker container inspect functional-442811 --format={{.State.Status}}
	I1207 22:45:28.608712  465303 out.go:179] * Verifying Kubernetes components...
	I1207 22:45:28.610171  465303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:45:28.627094  465303 addons.go:239] Setting addon default-storageclass=true in "functional-442811"
	W1207 22:45:28.627106  465303 addons.go:248] addon default-storageclass should already be in state true
	I1207 22:45:28.627129  465303 host.go:66] Checking if "functional-442811" exists ...
	I1207 22:45:28.627470  465303 cli_runner.go:164] Run: docker container inspect functional-442811 --format={{.State.Status}}
	I1207 22:45:28.628852  465303 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 22:45:28.630182  465303 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 22:45:28.630192  465303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 22:45:28.630248  465303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
	I1207 22:45:28.648336  465303 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 22:45:28.648353  465303 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 22:45:28.648414  465303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
	I1207 22:45:28.656963  465303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-442811/id_rsa Username:docker}
	I1207 22:45:28.668644  465303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-442811/id_rsa Username:docker}
	I1207 22:45:28.736653  465303 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 22:45:28.749995  465303 node_ready.go:35] waiting up to 6m0s for node "functional-442811" to be "Ready" ...
	I1207 22:45:28.752775  465303 node_ready.go:49] node "functional-442811" is "Ready"
	I1207 22:45:28.752790  465303 node_ready.go:38] duration metric: took 2.759557ms for node "functional-442811" to be "Ready" ...
	I1207 22:45:28.752803  465303 api_server.go:52] waiting for apiserver process to appear ...
	I1207 22:45:28.752847  465303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 22:45:28.757472  465303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 22:45:28.765759  465303 api_server.go:72] duration metric: took 158.538023ms to wait for apiserver process to appear ...
	I1207 22:45:28.765775  465303 api_server.go:88] waiting for apiserver healthz status ...
	I1207 22:45:28.765796  465303 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1207 22:45:28.770393  465303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 22:45:28.771113  465303 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1207 22:45:28.772100  465303 api_server.go:141] control plane version: v1.35.0-beta.0
	I1207 22:45:28.772114  465303 api_server.go:131] duration metric: took 6.333489ms to wait for apiserver health ...
	I1207 22:45:28.772121  465303 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 22:45:28.775271  465303 system_pods.go:59] 7 kube-system pods found
	I1207 22:45:28.775302  465303 system_pods.go:61] "coredns-7d764666f9-j4t8w" [a2e36c52-bbf6-4d92-9d08-a98e923801cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:45:28.775310  465303 system_pods.go:61] "etcd-functional-442811" [88a8f071-6fe4-4a21-8ef5-db5dec449a3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 22:45:28.775322  465303 system_pods.go:61] "kube-apiserver-functional-442811" [652d4c1f-b079-43de-b9f2-2bb63a1792fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 22:45:28.775329  465303 system_pods.go:61] "kube-controller-manager-functional-442811" [8792980f-102a-4d98-a5be-9145d3abe25a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 22:45:28.775335  465303 system_pods.go:61] "kube-proxy-d52sm" [96ea8ebe-72da-44ca-a3dc-f5fbfbfae18b] Running
	I1207 22:45:28.775342  465303 system_pods.go:61] "kube-scheduler-functional-442811" [fe22b3c6-2b59-4b9d-9c96-4429c664a4aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 22:45:28.775345  465303 system_pods.go:61] "storage-provisioner" [d491f99b-c404-41df-9fe9-00ede52c989f] Running
	I1207 22:45:28.775352  465303 system_pods.go:74] duration metric: took 3.225038ms to wait for pod list to return data ...
	I1207 22:45:28.775359  465303 default_sa.go:34] waiting for default service account to be created ...
	I1207 22:45:28.778399  465303 default_sa.go:45] found service account: "default"
	I1207 22:45:28.778413  465303 default_sa.go:55] duration metric: took 3.048168ms for default service account to be created ...
	I1207 22:45:28.778422  465303 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 22:45:28.781628  465303 system_pods.go:86] 7 kube-system pods found
	I1207 22:45:28.781652  465303 system_pods.go:89] "coredns-7d764666f9-j4t8w" [a2e36c52-bbf6-4d92-9d08-a98e923801cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:45:28.781662  465303 system_pods.go:89] "etcd-functional-442811" [88a8f071-6fe4-4a21-8ef5-db5dec449a3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 22:45:28.781671  465303 system_pods.go:89] "kube-apiserver-functional-442811" [652d4c1f-b079-43de-b9f2-2bb63a1792fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 22:45:28.781679  465303 system_pods.go:89] "kube-controller-manager-functional-442811" [8792980f-102a-4d98-a5be-9145d3abe25a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 22:45:28.781685  465303 system_pods.go:89] "kube-proxy-d52sm" [96ea8ebe-72da-44ca-a3dc-f5fbfbfae18b] Running
	I1207 22:45:28.781692  465303 system_pods.go:89] "kube-scheduler-functional-442811" [fe22b3c6-2b59-4b9d-9c96-4429c664a4aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 22:45:28.781696  465303 system_pods.go:89] "storage-provisioner" [d491f99b-c404-41df-9fe9-00ede52c989f] Running
	I1207 22:45:28.781703  465303 system_pods.go:126] duration metric: took 3.275374ms to wait for k8s-apps to be running ...
	I1207 22:45:28.781711  465303 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 22:45:28.781767  465303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 22:45:29.264400  465303 system_svc.go:56] duration metric: took 482.682084ms WaitForService to wait for kubelet
	I1207 22:45:29.264417  465303 kubeadm.go:587] duration metric: took 657.204104ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 22:45:29.264434  465303 node_conditions.go:102] verifying NodePressure condition ...
	I1207 22:45:29.266938  465303 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 22:45:29.266955  465303 node_conditions.go:123] node cpu capacity is 8
	I1207 22:45:29.266965  465303 node_conditions.go:105] duration metric: took 2.528198ms to run NodePressure ...
	I1207 22:45:29.266975  465303 start.go:242] waiting for startup goroutines ...
	I1207 22:45:29.271719  465303 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1207 22:45:29.272919  465303 addons.go:530] duration metric: took 665.659607ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1207 22:45:29.272956  465303 start.go:247] waiting for cluster config update ...
	I1207 22:45:29.272966  465303 start.go:256] writing updated cluster config ...
	I1207 22:45:29.273258  465303 ssh_runner.go:195] Run: rm -f paused
	I1207 22:45:29.277173  465303 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 22:45:29.279971  465303 pod_ready.go:83] waiting for pod "coredns-7d764666f9-j4t8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:45:31.285350  465303 pod_ready.go:94] pod "coredns-7d764666f9-j4t8w" is "Ready"
	I1207 22:45:31.285366  465303 pod_ready.go:86] duration metric: took 2.005385375s for pod "coredns-7d764666f9-j4t8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:45:31.287881  465303 pod_ready.go:83] waiting for pod "etcd-functional-442811" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 22:45:33.293853  465303 pod_ready.go:104] pod "etcd-functional-442811" is not "Ready", error: <nil>
	W1207 22:45:35.793740  465303 pod_ready.go:104] pod "etcd-functional-442811" is not "Ready", error: <nil>
	W1207 22:45:38.293624  465303 pod_ready.go:104] pod "etcd-functional-442811" is not "Ready", error: <nil>
	W1207 22:45:40.793514  465303 pod_ready.go:104] pod "etcd-functional-442811" is not "Ready", error: <nil>
	I1207 22:45:42.292589  465303 pod_ready.go:94] pod "etcd-functional-442811" is "Ready"
	I1207 22:45:42.292628  465303 pod_ready.go:86] duration metric: took 11.004734555s for pod "etcd-functional-442811" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:45:42.294561  465303 pod_ready.go:83] waiting for pod "kube-apiserver-functional-442811" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:45:42.298315  465303 pod_ready.go:94] pod "kube-apiserver-functional-442811" is "Ready"
	I1207 22:45:42.298328  465303 pod_ready.go:86] duration metric: took 3.754528ms for pod "kube-apiserver-functional-442811" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:45:42.300161  465303 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-442811" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:45:43.805675  465303 pod_ready.go:94] pod "kube-controller-manager-functional-442811" is "Ready"
	I1207 22:45:43.805693  465303 pod_ready.go:86] duration metric: took 1.505520344s for pod "kube-controller-manager-functional-442811" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:45:43.807683  465303 pod_ready.go:83] waiting for pod "kube-proxy-d52sm" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:45:43.811371  465303 pod_ready.go:94] pod "kube-proxy-d52sm" is "Ready"
	I1207 22:45:43.811388  465303 pod_ready.go:86] duration metric: took 3.69003ms for pod "kube-proxy-d52sm" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:45:43.891823  465303 pod_ready.go:83] waiting for pod "kube-scheduler-functional-442811" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:45:44.291116  465303 pod_ready.go:94] pod "kube-scheduler-functional-442811" is "Ready"
	I1207 22:45:44.291135  465303 pod_ready.go:86] duration metric: took 399.292224ms for pod "kube-scheduler-functional-442811" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:45:44.291145  465303 pod_ready.go:40] duration metric: took 15.013953609s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
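
	The final wait above loops over the core control-plane label selectors until every matching kube-system pod reports Ready. A rough client-go equivalent of that check (a sketch assuming a reachable kubeconfig at the default path; the selectors and the 4-minute budget come from the log, everything else is illustrative):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		for _, sel := range selectors {
			err := wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // keep polling on transient errors or missing pods
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil
					}
				}
				return true, nil
			})
			fmt.Printf("selector %q ready: %v\n", sel, err == nil)
		}
	}
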
	I1207 22:45:44.343161  465303 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1207 22:45:44.345001  465303 out.go:179] * Done! kubectl is now configured to use "functional-442811" cluster and "default" namespace by default
	
	
	==> Docker <==
	Dec 07 22:46:10 functional-442811 dockerd[7407]: time="2025-12-07T22:46:10.269640046Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:46:12 functional-442811 dockerd[7407]: time="2025-12-07T22:46:12.269398845Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:46:15 functional-442811 dockerd[7407]: time="2025-12-07T22:46:15.267034045Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:46:33 functional-442811 dockerd[7407]: time="2025-12-07T22:46:33.268511526Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:46:36 functional-442811 dockerd[7407]: time="2025-12-07T22:46:36.270018741Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:46:37 functional-442811 dockerd[7407]: time="2025-12-07T22:46:37.277010960Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:46:40 functional-442811 dockerd[7407]: time="2025-12-07T22:46:40.271933236Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:47:14 functional-442811 dockerd[7407]: time="2025-12-07T22:47:14.272524207Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:47:19 functional-442811 dockerd[7407]: time="2025-12-07T22:47:19.264612367Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:47:21 functional-442811 dockerd[7407]: time="2025-12-07T22:47:21.270974654Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:47:28 functional-442811 dockerd[7407]: time="2025-12-07T22:47:28.569398442Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:47:28 functional-442811 cri-dockerd[7726]: time="2025-12-07T22:47:28Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Dec 07 22:48:42 functional-442811 dockerd[7407]: time="2025-12-07T22:48:42.272268090Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:48:48 functional-442811 dockerd[7407]: time="2025-12-07T22:48:48.266472601Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:48:52 functional-442811 dockerd[7407]: time="2025-12-07T22:48:52.276705275Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:49:01 functional-442811 dockerd[7407]: time="2025-12-07T22:49:01.262704036Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:51:25 functional-442811 dockerd[7407]: time="2025-12-07T22:51:25.553673455Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:51:25 functional-442811 cri-dockerd[7726]: time="2025-12-07T22:51:25Z" level=info msg="Stop pulling image docker.io/mysql:5.7: 5.7: Pulling from library/mysql"
	Dec 07 22:51:34 functional-442811 dockerd[7407]: time="2025-12-07T22:51:34.271464817Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=7b2717513967 ep=k8s_POD_hello-node-5758569b79-6bwdx_default_bb6e8c71-aaa3-4a46-9946-a5ac8718a889_0 net=none nid=9dd2fc082379
	Dec 07 22:51:34 functional-442811 cri-dockerd[7726]: time="2025-12-07T22:51:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/55db6e6cade28d83a147b979db2343db3cf0b5568d1ee1079a6eb7b6409a4412/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 07 22:51:35 functional-442811 dockerd[7407]: time="2025-12-07T22:51:35.348185586Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:51:38 functional-442811 dockerd[7407]: time="2025-12-07T22:51:38.283338388Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:51:39 functional-442811 dockerd[7407]: time="2025-12-07T22:51:39.276334180Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:51:45 functional-442811 dockerd[7407]: time="2025-12-07T22:51:45.263850720Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:51:49 functional-442811 dockerd[7407]: time="2025-12-07T22:51:49.274460624Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
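
	The repeated "toomanyrequests" dockerd entries above are Docker Hub's anonymous pull rate limit, and they are why the nginx and mysql image pulls seen in this log never completed. Docker documents a way to read the remaining anonymous quota without consuming a pull, via the ratelimitpreview/test repository; a small Go sketch of that check (the auth.docker.io and registry-1.docker.io URLs are Docker Hub's documented endpoints, unrelated to minikube itself):

	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	func main() {
		// Anonymous token scoped to the rate-limit test repository.
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			panic(err)
		}

		// A HEAD request on the manifest returns the quota headers without
		// counting as a pull.
		req, _ := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		mresp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer mresp.Body.Close()
		fmt.Println("ratelimit-limit:    ", mresp.Header.Get("ratelimit-limit"))
		fmt.Println("ratelimit-remaining:", mresp.Header.Get("ratelimit-remaining"))
		fmt.Println("ratelimit-source:   ", mresp.Header.Get("docker-ratelimit-source"))
	}
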
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1dbbaf0070f63       aa5e3ebc0dfed       6 minutes ago       Running             coredns                   2                   ab34a5f620355       coredns-7d764666f9-j4t8w                    kube-system
	88ac0a9f95196       6e38f40d628db       6 minutes ago       Running             storage-provisioner       3                   8edd26c4e4b1f       storage-provisioner                         kube-system
	1241f85066cfe       8a4ded35a3eb1       6 minutes ago       Running             kube-proxy                2                   f9a2027ed4081       kube-proxy-d52sm                            kube-system
	83add6b3124c5       a3e246e9556e9       6 minutes ago       Running             etcd                      2                   34b1626cbb491       etcd-functional-442811                      kube-system
	aeabb2a736d2e       7bb6219ddab95       6 minutes ago       Running             kube-scheduler            2                   582a63f870e0e       kube-scheduler-functional-442811            kube-system
	81b5af16f1153       45f3cc72d235f       6 minutes ago       Running             kube-controller-manager   2                   4405a0223b979       kube-controller-manager-functional-442811   kube-system
	c9ff972b6d4dd       aa9d02839d8de       6 minutes ago       Running             kube-apiserver            0                   f2eb5859b259b       kube-apiserver-functional-442811            kube-system
	0103126cf1fac       6e38f40d628db       7 minutes ago       Exited              storage-provisioner       2                   18663e96ba841       storage-provisioner                         kube-system
	4e35a7433174e       aa5e3ebc0dfed       7 minutes ago       Exited              coredns                   1                   c42fbed28717d       coredns-7d764666f9-j4t8w                    kube-system
	8ccc720180cb5       7bb6219ddab95       7 minutes ago       Exited              kube-scheduler            1                   fb9985bca5bd0       kube-scheduler-functional-442811            kube-system
	79247fb2be94f       45f3cc72d235f       7 minutes ago       Exited              kube-controller-manager   1                   8e814ef6acfa3       kube-controller-manager-functional-442811   kube-system
	ff1925c035bbc       a3e246e9556e9       7 minutes ago       Exited              etcd                      1                   35d62fd2758ef       etcd-functional-442811                      kube-system
	c731d93d46317       8a4ded35a3eb1       7 minutes ago       Exited              kube-proxy                1                   eda0c18c12a74       kube-proxy-d52sm                            kube-system
	
	
	==> coredns [1dbbaf0070f6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42546 - 63724 "HINFO IN 3795866867150485848.7821617772591164036. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021891038s
	
	
	==> coredns [4e35a7433174] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46129 - 64813 "HINFO IN 5819608903442519064.4892300565990545133. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019403977s
	
	
	==> describe nodes <==
	Name:               functional-442811
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-442811
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=functional-442811
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T22_43_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 22:43:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-442811
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 22:51:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 22:48:40 +0000   Sun, 07 Dec 2025 22:43:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 22:48:40 +0000   Sun, 07 Dec 2025 22:43:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 22:48:40 +0000   Sun, 07 Dec 2025 22:43:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 22:48:40 +0000   Sun, 07 Dec 2025 22:43:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-442811
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                75fc37d3-ccae-49e7-9308-4a7688634355
	  Boot ID:                    10618540-d4ef-4c75-8cf1-8b1c0379fe5e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.1.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-6bwdx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     hello-node-connect-9f67c86d4-gldsc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  default                     mysql-844cf969f6-zm2lh                       600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     6m9s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-7d764666f9-j4t8w                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m20s
	  kube-system                 etcd-functional-442811                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m27s
	  kube-system                 kube-apiserver-functional-442811             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-controller-manager-functional-442811    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-proxy-d52sm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-scheduler-functional-442811             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  8m22s  node-controller  Node functional-442811 event: Registered Node functional-442811 in Controller
	  Normal  RegisteredNode  7m19s  node-controller  Node functional-442811 event: Registered Node functional-442811 in Controller
	  Normal  RegisteredNode  6m30s  node-controller  Node functional-442811 event: Registered Node functional-442811 in Controller
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +2.047884] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000024] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +4.031738] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000022] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +8.383561] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +3.048952] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.046793] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023934] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000022] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023938] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000007] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023928] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000023] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023939] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +2.047870] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +4.031775] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000024] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +8.255538] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000025] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	
	
	==> etcd [83add6b3124c] <==
	{"level":"warn","ts":"2025-12-07T22:45:26.443524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.456496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.465229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.471450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.478374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.484420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.492052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.498708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.504875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.511207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.528806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.541786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.548777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.554976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.561284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.568001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.574155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.580316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.587757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.594537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.613389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.619780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.627493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.633965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.679295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42392","server-name":"","error":"EOF"}
	
	
	==> etcd [ff1925c035bb] <==
	{"level":"warn","ts":"2025-12-07T22:44:37.422211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.431053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.437500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.444765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.450951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.457308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.464461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.470302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.476974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.486685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.492641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.499833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.508638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.515414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.522831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.529466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.535768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.541989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.548192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.555143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.580067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.586763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.593170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.599554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.644544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49272","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:52:00 up  1:34,  0 user,  load average: 0.24, 0.25, 0.87
	Linux functional-442811 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [c9ff972b6d4d] <==
	I1207 22:45:27.110184       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1207 22:45:27.111404       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:27.111455       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1207 22:45:27.111467       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1207 22:45:27.112240       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 22:45:27.113785       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 22:45:27.113805       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1207 22:45:27.115232       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1207 22:45:27.116901       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 22:45:27.127481       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 22:45:27.451017       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:45:27.451017       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:45:28.013053       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1207 22:45:28.426205       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 22:45:28.460038       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 22:45:28.480994       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 22:45:28.487143       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 22:45:30.606652       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 22:45:30.653920       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 22:45:30.704912       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 22:45:46.659446       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.200.238"}
	I1207 22:45:51.565249       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.181.88"}
	I1207 22:45:54.076127       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.134.119"}
	I1207 22:45:57.145808       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.16.235"}
	I1207 22:51:33.902860       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.151.143"}
	
	
	==> kube-controller-manager [79247fb2be94] <==
	I1207 22:44:41.225515       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225538       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225327       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225806       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225845       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225900       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226080       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226093       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226237       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226254       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226316       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226433       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226705       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1207 22:44:41.226904       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-442811"
	I1207 22:44:41.226954       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1207 22:44:41.227069       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227104       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227789       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227880       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227875       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.229278       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.322481       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.324646       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.324662       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 22:44:41.324666       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-controller-manager [81b5af16f115] <==
	I1207 22:45:30.256805       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1207 22:45:30.257041       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.257071       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.257765       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258292       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258313       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258361       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258392       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258433       1 range_allocator.go:177] "Sending events to api server"
	I1207 22:45:30.258518       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258521       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1207 22:45:30.258698       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:30.258704       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258623       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258586       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258614       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258633       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258627       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258636       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.266934       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:30.266989       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.358127       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.358147       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 22:45:30.358154       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 22:45:30.367867       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [1241f85066cf] <==
	I1207 22:45:27.866139       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:45:27.929383       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:28.029574       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:28.029634       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:45:28.029756       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:45:28.054488       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:45:28.054553       1 server_linux.go:136] "Using iptables Proxier"
	I1207 22:45:28.061247       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:45:28.061709       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 22:45:28.061749       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:45:28.063126       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:45:28.063159       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:45:28.063200       1 config.go:200] "Starting service config controller"
	I1207 22:45:28.063206       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:45:28.063199       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:45:28.063226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:45:28.063264       1 config.go:309] "Starting node config controller"
	I1207 22:45:28.063271       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:45:28.063278       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:45:28.163244       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:45:28.163313       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:45:28.163332       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c731d93d4631] <==
	I1207 22:44:36.479925       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:44:36.568255       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:44:38.168788       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:38.168826       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:44:38.169011       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:44:38.212997       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:44:38.213059       1 server_linux.go:136] "Using iptables Proxier"
	I1207 22:44:38.218592       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:44:38.219063       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 22:44:38.219101       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:44:38.220955       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:44:38.220984       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:44:38.220986       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:44:38.221014       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:44:38.221097       1 config.go:309] "Starting node config controller"
	I1207 22:44:38.221103       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:44:38.221109       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:44:38.221482       1 config.go:200] "Starting service config controller"
	I1207 22:44:38.221666       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:44:38.321151       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 22:44:38.321157       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:44:38.322219       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8ccc720180cb] <==
	I1207 22:44:36.923472       1 serving.go:386] Generated self-signed cert in-memory
	W1207 22:44:38.008074       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 22:44:38.008306       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 22:44:38.008328       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 22:44:38.008462       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 22:44:38.053370       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 22:44:38.053407       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:44:38.056294       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:44:38.056353       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:44:38.056369       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 22:44:38.056891       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 22:44:38.156767       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [aeabb2a736d2] <==
	I1207 22:45:25.216680       1 serving.go:386] Generated self-signed cert in-memory
	W1207 22:45:27.046224       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 22:45:27.046264       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1207 22:45:27.046277       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 22:45:27.046286       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 22:45:27.062035       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 22:45:27.062058       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:45:27.063812       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:45:27.063845       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:27.063924       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 22:45:27.064004       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 22:45:27.164533       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 07 22:51:36 functional-442811 kubelet[8486]: E1207 22:51:36.064938    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-6bwdx" podUID="bb6e8c71-aaa3-4a46-9946-a5ac8718a889"
	Dec 07 22:51:38 functional-442811 kubelet[8486]: E1207 22:51:38.285586    8486 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 07 22:51:38 functional-442811 kubelet[8486]: E1207 22:51:38.285652    8486 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 07 22:51:38 functional-442811 kubelet[8486]: E1207 22:51:38.286009    8486 kuberuntime_manager.go:1664] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(a0787535-37fd-46f0-bd6d-f603d1557ee2): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 07 22:51:38 functional-442811 kubelet[8486]: E1207 22:51:38.286057    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="a0787535-37fd-46f0-bd6d-f603d1557ee2"
	Dec 07 22:51:39 functional-442811 kubelet[8486]: E1207 22:51:39.278540    8486 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 07 22:51:39 functional-442811 kubelet[8486]: E1207 22:51:39.278621    8486 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 07 22:51:39 functional-442811 kubelet[8486]: E1207 22:51:39.278872    8486 kuberuntime_manager.go:1664] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-9f67c86d4-gldsc_default(df13df8d-919f-420d-b0e4-4e5e489d4991): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 07 22:51:39 functional-442811 kubelet[8486]: E1207 22:51:39.278924    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-gldsc" podUID="df13df8d-919f-420d-b0e4-4e5e489d4991"
	Dec 07 22:51:39 functional-442811 kubelet[8486]: E1207 22:51:39.281429    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-zm2lh" podUID="e691e40e-e4d5-4b7e-b852-ca016ddf9542"
	Dec 07 22:51:45 functional-442811 kubelet[8486]: E1207 22:51:45.266328    8486 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 07 22:51:45 functional-442811 kubelet[8486]: E1207 22:51:45.266385    8486 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 07 22:51:45 functional-442811 kubelet[8486]: E1207 22:51:45.266620    8486 kuberuntime_manager.go:1664] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(df055e75-dc3b-436d-9b15-60ec788da8a6): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 07 22:51:45 functional-442811 kubelet[8486]: E1207 22:51:45.266668    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="df055e75-dc3b-436d-9b15-60ec788da8a6"
	Dec 07 22:51:48 functional-442811 kubelet[8486]: E1207 22:51:48.285345    8486 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-442811" containerName="kube-controller-manager"
	Dec 07 22:51:49 functional-442811 kubelet[8486]: E1207 22:51:49.276703    8486 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 07 22:51:49 functional-442811 kubelet[8486]: E1207 22:51:49.276770    8486 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 07 22:51:49 functional-442811 kubelet[8486]: E1207 22:51:49.277030    8486 kuberuntime_manager.go:1664] "Unhandled Error" err="container echo-server start failed in pod hello-node-5758569b79-6bwdx_default(bb6e8c71-aaa3-4a46-9946-a5ac8718a889): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 07 22:51:49 functional-442811 kubelet[8486]: E1207 22:51:49.277086    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-6bwdx" podUID="bb6e8c71-aaa3-4a46-9946-a5ac8718a889"
	Dec 07 22:51:51 functional-442811 kubelet[8486]: E1207 22:51:51.279976    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="a0787535-37fd-46f0-bd6d-f603d1557ee2"
	Dec 07 22:51:53 functional-442811 kubelet[8486]: E1207 22:51:53.282195    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-zm2lh" podUID="e691e40e-e4d5-4b7e-b852-ca016ddf9542"
	Dec 07 22:51:54 functional-442811 kubelet[8486]: E1207 22:51:54.280469    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-gldsc" podUID="df13df8d-919f-420d-b0e4-4e5e489d4991"
	Dec 07 22:51:58 functional-442811 kubelet[8486]: E1207 22:51:58.282323    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="df055e75-dc3b-436d-9b15-60ec788da8a6"
	Dec 07 22:51:59 functional-442811 kubelet[8486]: E1207 22:51:59.279080    8486 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-442811" containerName="etcd"
	Dec 07 22:52:00 functional-442811 kubelet[8486]: E1207 22:52:00.279828    8486 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-442811" containerName="kube-scheduler"
	
	
	==> storage-provisioner [0103126cf1fa] <==
	I1207 22:44:50.702907       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 22:44:50.702947       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1207 22:44:50.705054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:54.159918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:58.420300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:02.019059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:05.072830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.094990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.100139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 22:45:08.100304       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 22:45:08.100474       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-442811_7469dd5e-c834-4569-8c4d-488a475d8a7b!
	I1207 22:45:08.100445       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6744592d-205c-493c-9eed-33025935219a", APIVersion:"v1", ResourceVersion:"558", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-442811_7469dd5e-c834-4569-8c4d-488a475d8a7b became leader
	W1207 22:45:08.102399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.105316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 22:45:08.200705       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-442811_7469dd5e-c834-4569-8c4d-488a475d8a7b!
	W1207 22:45:10.108971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:10.113858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:12.117456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:12.121527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:14.124245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:14.128190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:16.131409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:16.136192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:18.138850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:18.142809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [88ac0a9f9519] <==
	W1207 22:51:36.578402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:38.582074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:38.585916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:40.589223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:40.593083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:42.595988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:42.601127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:44.604396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:44.608358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:46.611657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:46.615451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:48.618503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:48.622070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:50.625591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:50.629721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:52.632537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:52.637454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:54.640634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:54.645497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:56.648655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:56.652276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:58.657020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:51:58.662012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:00.666007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:52:00.670057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
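
The kubelet log and pod events above all fail with "toomanyrequests" from Docker Hub, i.e. the unauthenticated pull rate limit, which accounts for the non-running pods checked below. A minimal sketch for inspecting the remaining anonymous quota from the affected host (assumes curl and jq are installed; ratelimitpreview/test is Docker's documented probe repository, not an image used by this test run):

    # fetch an anonymous token, then read the rate-limit headers returned by the registry
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -s --head -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

The ratelimit-limit and ratelimit-remaining headers show the quota window currently applied to this runner's source IP.
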
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-442811 -n functional-442811
helpers_test.go:269: (dbg) Run:  kubectl --context functional-442811 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-442811 describe pod hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-442811 describe pod hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-5758569b79-6bwdx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:51:33 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vq8rd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vq8rd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  27s                default-scheduler  Successfully assigned default/hello-node-5758569b79-6bwdx to functional-442811
	  Normal   BackOff    25s                kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     25s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    13s (x2 over 27s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     12s (x2 over 26s)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     12s (x2 over 26s)  kubelet            Error: ErrImagePull
	
	
	Name:             hello-node-connect-9f67c86d4-gldsc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dh5vh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dh5vh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m4s                  default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-gldsc to functional-442811
	  Normal   Pulling    3m10s (x5 over 6m4s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m9s (x5 over 6m3s)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m9s (x5 over 6m3s)   kubelet            Error: ErrImagePull
	  Warning  Failed     61s (x20 over 6m3s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    48s (x21 over 6m3s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-844cf969f6-zm2lh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:51 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-48z94 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-48z94:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m9s                   default-scheduler  Successfully assigned default/mysql-844cf969f6-zm2lh to functional-442811
	  Warning  Failed     6m8s                   kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m20s (x5 over 6m9s)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     3m19s (x5 over 6m8s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m19s (x4 over 5m53s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     63s (x20 over 6m8s)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    50s (x21 over 6m8s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:54 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r8xnp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r8xnp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m7s                 default-scheduler  Successfully assigned default/nginx-svc to functional-442811
	  Warning  Failed     4m33s                kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m1s (x5 over 6m7s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m (x4 over 6m6s)    kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m (x5 over 6m6s)    kubelet            Error: ErrImagePull
	  Warning  Failed     55s (x20 over 6m6s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    40s (x21 over 6m6s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:59 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vt2bl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vt2bl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/sp-pod to functional-442811
	  Normal   Pulling    3m14s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m13s (x5 over 6m1s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m13s (x5 over 6m1s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    59s (x21 over 6m1s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     59s (x21 over 6m1s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (367.68s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-442811 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-zm2lh" [e691e40e-e4d5-4b7e-b852-ca016ddf9542] Pending
helpers_test.go:352: "mysql-844cf969f6-zm2lh" [e691e40e-e4d5-4b7e-b852-ca016ddf9542] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-442811 -n functional-442811
functional_test.go:1804: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: showing logs for failed pods as of 2025-12-07 22:55:51.938896357 +0000 UTC m=+1582.224948937
functional_test.go:1804: (dbg) Run:  kubectl --context functional-442811 describe po mysql-844cf969f6-zm2lh -n default
functional_test.go:1804: (dbg) kubectl --context functional-442811 describe po mysql-844cf969f6-zm2lh -n default:
Name:             mysql-844cf969f6-zm2lh
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-442811/192.168.49.2
Start Time:       Sun, 07 Dec 2025 22:45:51 +0000
Labels:           app=mysql
                  pod-template-hash=844cf969f6
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/mysql-844cf969f6
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-48z94 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-48z94:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-844cf969f6-zm2lh to functional-442811
  Warning  Failed     9m59s                   kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    7m11s (x5 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     7m10s (x5 over 9m59s)   kubelet            Error: ErrImagePull
  Warning  Failed     7m10s (x4 over 9m44s)   kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     4m54s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m41s (x21 over 9m59s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-442811 logs mysql-844cf969f6-zm2lh -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-442811 logs mysql-844cf969f6-zm2lh -n default: exit status 1 (71.105938ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-844cf969f6-zm2lh" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-442811 logs mysql-844cf969f6-zm2lh -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-442811
helpers_test.go:243: (dbg) docker inspect functional-442811:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762",
	        "Created": "2025-12-07T22:43:22.049081307Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 456864,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T22:43:22.089713066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/hostname",
	        "HostsPath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/hosts",
	        "LogPath": "/var/lib/docker/containers/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762/1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762-json.log",
	        "Name": "/functional-442811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-442811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-442811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1699178f8a7c3755a5daa3ce9dad5f01448b5919784eccd2fc40c26ff389c762",
	                "LowerDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470-init/diff:/var/lib/docker/overlay2/72e2c0d34d3438044c6ca8754190358557351efc0aeb527bd1060ce52e748152/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d7c45936f6119213b3285a7f6a06509ba6a63e767162da1ee6f414e72615470/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-442811",
	                "Source": "/var/lib/docker/volumes/functional-442811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-442811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-442811",
	                "name.minikube.sigs.k8s.io": "functional-442811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "31fe22828acd889527928fad9cf9de4644c6693bf1715496a16bc2b07706d2c3",
	            "SandboxKey": "/var/run/docker/netns/31fe22828acd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-442811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0b711061d8c4c0d449da65ff00c005cf89c83f72d15bf795a0f752ebfb4033e6",
	                    "EndpointID": "056c3331b34f02b412809772934c779a89252a7366e762ac002e00a13fc17922",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "96:63:b1:92:b7:e8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-442811",
	                        "1699178f8a7c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-442811 -n functional-442811
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-442811 logs -n 25: (1.022516419s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-442811 ssh -- ls -la /mount-9p                                                                                                       │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh            │ functional-442811 ssh sudo umount -f /mount-9p                                                                                                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ mount          │ -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount3 --alsologtostderr -v=1              │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ mount          │ -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount2 --alsologtostderr -v=1              │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ ssh            │ functional-442811 ssh findmnt -T /mount1                                                                                                        │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ mount          │ -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount1 --alsologtostderr -v=1              │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ ssh            │ functional-442811 ssh findmnt -T /mount1                                                                                                        │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh            │ functional-442811 ssh findmnt -T /mount2                                                                                                        │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh            │ functional-442811 ssh findmnt -T /mount3                                                                                                        │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ mount          │ -p functional-442811 --kill=true                                                                                                                │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ start          │ -p functional-442811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0 │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ start          │ -p functional-442811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0 │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ start          │ -p functional-442811 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0           │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-442811 --alsologtostderr -v=1                                                                                  │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ license        │                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ update-context │ functional-442811 update-context --alsologtostderr -v=2                                                                                         │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ update-context │ functional-442811 update-context --alsologtostderr -v=2                                                                                         │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ update-context │ functional-442811 update-context --alsologtostderr -v=2                                                                                         │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-442811 image ls --format short --alsologtostderr                                                                                     │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-442811 image ls --format yaml --alsologtostderr                                                                                      │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ ssh            │ functional-442811 ssh pgrep buildkitd                                                                                                           │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │                     │
	│ image          │ functional-442811 image build -t localhost/my-image:functional-442811 testdata/build --alsologtostderr                                          │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-442811 image ls                                                                                                                      │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-442811 image ls --format json --alsologtostderr                                                                                      │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	│ image          │ functional-442811 image ls --format table --alsologtostderr                                                                                     │ functional-442811 │ jenkins │ v1.37.0 │ 07 Dec 25 22:52 UTC │ 07 Dec 25 22:52 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:52:15
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:52:15.351962  481599 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:52:15.352191  481599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:52:15.352201  481599 out.go:374] Setting ErrFile to fd 2...
	I1207 22:52:15.352205  481599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:52:15.352433  481599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 22:52:15.352871  481599 out.go:368] Setting JSON to false
	I1207 22:52:15.353910  481599 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5678,"bootTime":1765142257,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:52:15.353965  481599 start.go:143] virtualization: kvm guest
	I1207 22:52:15.355654  481599 out.go:179] * [functional-442811] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:52:15.357025  481599 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:52:15.357048  481599 notify.go:221] Checking for updates...
	I1207 22:52:15.359227  481599 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:52:15.360452  481599 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	I1207 22:52:15.361525  481599 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	I1207 22:52:15.362415  481599 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:52:15.363417  481599 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:52:15.364856  481599 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1207 22:52:15.365411  481599 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:52:15.388991  481599 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:52:15.389173  481599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:52:15.446662  481599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 22:52:15.436555194 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:52:15.446764  481599 docker.go:319] overlay module found
	I1207 22:52:15.448725  481599 out.go:179] * Using the docker driver based on existing profile
	I1207 22:52:15.449760  481599 start.go:309] selected driver: docker
	I1207 22:52:15.449777  481599 start.go:927] validating driver "docker" against &{Name:functional-442811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-442811 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:52:15.449897  481599 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:52:15.450002  481599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:52:15.506317  481599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 22:52:15.497105264 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:52:15.507092  481599 cni.go:84] Creating CNI manager for ""
	I1207 22:52:15.507183  481599 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 22:52:15.507250  481599 start.go:353] cluster config:
	{Name:functional-442811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-442811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:52:15.509831  481599 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 07 22:52:17 functional-442811 dockerd[7407]: time="2025-12-07T22:52:17.203234705Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:52:17 functional-442811 dockerd[7407]: time="2025-12-07T22:52:17.689649853Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:52:17 functional-442811 dockerd[7407]: time="2025-12-07T22:52:17.927471618Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:52:18 functional-442811 dockerd[7407]: time="2025-12-07T22:52:18.410208645Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:52:19 functional-442811 dockerd[7407]: time="2025-12-07T22:52:19.413243936Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:52:23 functional-442811 dockerd[7407]: time="2025-12-07T22:52:23.298913078Z" level=info msg="sbJoin: gwep4 ''->'ade309e1b37d', gwep6 ''->''"
	Dec 07 22:52:30 functional-442811 dockerd[7407]: time="2025-12-07T22:52:30.524525032Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:52:31 functional-442811 dockerd[7407]: time="2025-12-07T22:52:31.003907059Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:52:33 functional-442811 dockerd[7407]: time="2025-12-07T22:52:33.519999508Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:52:34 functional-442811 dockerd[7407]: time="2025-12-07T22:52:34.000741842Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:52:57 functional-442811 dockerd[7407]: time="2025-12-07T22:52:57.521343976Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:52:57 functional-442811 dockerd[7407]: time="2025-12-07T22:52:57.996442965Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:53:03 functional-442811 dockerd[7407]: time="2025-12-07T22:53:03.518951686Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:53:04 functional-442811 dockerd[7407]: time="2025-12-07T22:53:04.003130172Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:53:10 functional-442811 dockerd[7407]: time="2025-12-07T22:53:10.260438894Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:53:44 functional-442811 dockerd[7407]: time="2025-12-07T22:53:44.519195114Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:53:45 functional-442811 dockerd[7407]: time="2025-12-07T22:53:45.285735209Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:53:45 functional-442811 cri-dockerd[7726]: time="2025-12-07T22:53:45Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Dec 07 22:53:48 functional-442811 dockerd[7407]: time="2025-12-07T22:53:48.516413636Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:53:48 functional-442811 dockerd[7407]: time="2025-12-07T22:53:48.991029698Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:54:40 functional-442811 dockerd[7407]: time="2025-12-07T22:54:40.282328309Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:55:06 functional-442811 dockerd[7407]: time="2025-12-07T22:55:06.522724087Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 07 22:55:07 functional-442811 dockerd[7407]: time="2025-12-07T22:55:07.010549287Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 07 22:55:13 functional-442811 dockerd[7407]: time="2025-12-07T22:55:13.522616336Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 07 22:55:14 functional-442811 dockerd[7407]: time="2025-12-07T22:55:14.004231447Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e878acdb79575       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   3 minutes ago       Exited              mount-munger              0                   cf598e6d0b5c8       busybox-mount                               default
	1dbbaf0070f63       aa5e3ebc0dfed                                                                                         10 minutes ago      Running             coredns                   2                   ab34a5f620355       coredns-7d764666f9-j4t8w                    kube-system
	88ac0a9f95196       6e38f40d628db                                                                                         10 minutes ago      Running             storage-provisioner       3                   8edd26c4e4b1f       storage-provisioner                         kube-system
	1241f85066cfe       8a4ded35a3eb1                                                                                         10 minutes ago      Running             kube-proxy                2                   f9a2027ed4081       kube-proxy-d52sm                            kube-system
	83add6b3124c5       a3e246e9556e9                                                                                         10 minutes ago      Running             etcd                      2                   34b1626cbb491       etcd-functional-442811                      kube-system
	aeabb2a736d2e       7bb6219ddab95                                                                                         10 minutes ago      Running             kube-scheduler            2                   582a63f870e0e       kube-scheduler-functional-442811            kube-system
	81b5af16f1153       45f3cc72d235f                                                                                         10 minutes ago      Running             kube-controller-manager   2                   4405a0223b979       kube-controller-manager-functional-442811   kube-system
	c9ff972b6d4dd       aa9d02839d8de                                                                                         10 minutes ago      Running             kube-apiserver            0                   f2eb5859b259b       kube-apiserver-functional-442811            kube-system
	0103126cf1fac       6e38f40d628db                                                                                         11 minutes ago      Exited              storage-provisioner       2                   18663e96ba841       storage-provisioner                         kube-system
	4e35a7433174e       aa5e3ebc0dfed                                                                                         11 minutes ago      Exited              coredns                   1                   c42fbed28717d       coredns-7d764666f9-j4t8w                    kube-system
	8ccc720180cb5       7bb6219ddab95                                                                                         11 minutes ago      Exited              kube-scheduler            1                   fb9985bca5bd0       kube-scheduler-functional-442811            kube-system
	79247fb2be94f       45f3cc72d235f                                                                                         11 minutes ago      Exited              kube-controller-manager   1                   8e814ef6acfa3       kube-controller-manager-functional-442811   kube-system
	ff1925c035bbc       a3e246e9556e9                                                                                         11 minutes ago      Exited              etcd                      1                   35d62fd2758ef       etcd-functional-442811                      kube-system
	c731d93d46317       8a4ded35a3eb1                                                                                         11 minutes ago      Exited              kube-proxy                1                   eda0c18c12a74       kube-proxy-d52sm                            kube-system
	
	
	==> coredns [1dbbaf0070f6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42546 - 63724 "HINFO IN 3795866867150485848.7821617772591164036. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021891038s
	
	
	==> coredns [4e35a7433174] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46129 - 64813 "HINFO IN 5819608903442519064.4892300565990545133. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019403977s
	
	
	==> describe nodes <==
	Name:               functional-442811
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-442811
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=functional-442811
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T22_43_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 22:43:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-442811
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 22:55:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 22:52:25 +0000   Sun, 07 Dec 2025 22:43:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 22:52:25 +0000   Sun, 07 Dec 2025 22:43:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 22:52:25 +0000   Sun, 07 Dec 2025 22:43:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 22:52:25 +0000   Sun, 07 Dec 2025 22:43:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-442811
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                75fc37d3-ccae-49e7-9308-4a7688634355
	  Boot ID:                    10618540-d4ef-4c75-8cf1-8b1c0379fe5e
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.1.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-6bwdx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  default                     hello-node-connect-9f67c86d4-gldsc            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  default                     mysql-844cf969f6-zm2lh                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-7d764666f9-j4t8w                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-442811                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-442811              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-442811     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-d52sm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-442811              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-6kf2d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-chtwb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  12m   node-controller  Node functional-442811 event: Registered Node functional-442811 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node functional-442811 event: Registered Node functional-442811 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-442811 event: Registered Node functional-442811 in Controller
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +2.047884] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000024] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +4.031738] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000022] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +8.383561] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +3.048952] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.046793] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023934] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000022] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023938] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000007] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023928] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000023] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +1.023939] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +2.047870] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000008] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +4.031775] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000024] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	[  +8.255538] IPv4: martian source 10.110.150.78 from 192.168.49.1, on dev br-0b711061d8c4
	[  +0.000025] ll header: 00000000: fe fa 5c 53 bf e3 96 63 b1 92 b7 e8 08 00
	
	
	==> etcd [83add6b3124c] <==
	{"level":"warn","ts":"2025-12-07T22:45:26.471450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.478374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.484420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.492052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.498708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.504875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.511207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.528806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.541786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.548777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.554976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.561284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.568001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.574155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.580316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.587757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.594537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.613389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.619780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.627493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.633965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:45:26.679295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42392","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T22:55:26.195035Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1217}
	{"level":"info","ts":"2025-12-07T22:55:26.214333Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1217,"took":"18.921522ms","hash":2299821684,"current-db-size-bytes":4055040,"current-db-size":"4.1 MB","current-db-size-in-use-bytes":2117632,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-12-07T22:55:26.214380Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2299821684,"revision":1217,"compact-revision":-1}
	
	
	==> etcd [ff1925c035bb] <==
	{"level":"warn","ts":"2025-12-07T22:44:37.422211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.431053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.437500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.444765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.450951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.457308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.464461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.470302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.476974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.486685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.492641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.499833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.508638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.515414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.522831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.529466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.535768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.541989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.548192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.555143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.580067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.586763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.593170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.599554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:44:37.644544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49272","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:55:53 up  1:38,  0 user,  load average: 0.02, 0.16, 0.70
	Linux functional-442811 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [c9ff972b6d4d] <==
	I1207 22:45:27.112240       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 22:45:27.113785       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 22:45:27.113805       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1207 22:45:27.115232       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1207 22:45:27.116901       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 22:45:27.127481       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 22:45:27.451017       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:45:27.451017       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 22:45:28.013053       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1207 22:45:28.426205       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 22:45:28.460038       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 22:45:28.480994       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 22:45:28.487143       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 22:45:30.606652       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 22:45:30.653920       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 22:45:30.704912       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 22:45:46.659446       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.200.238"}
	I1207 22:45:51.565249       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.181.88"}
	I1207 22:45:54.076127       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.134.119"}
	I1207 22:45:57.145808       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.16.235"}
	I1207 22:51:33.902860       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.151.143"}
	I1207 22:52:16.358533       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 22:52:16.463265       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.17.54"}
	I1207 22:52:16.477891       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.206.56"}
	I1207 22:55:27.045591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [79247fb2be94] <==
	I1207 22:44:41.225515       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225538       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225327       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225806       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225845       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.225900       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226080       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226093       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226237       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226254       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226316       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226433       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.226705       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1207 22:44:41.226904       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-442811"
	I1207 22:44:41.226954       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1207 22:44:41.227069       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227104       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227789       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227880       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.227875       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.229278       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.322481       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.324646       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:41.324662       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 22:44:41.324666       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-controller-manager [81b5af16f115] <==
	I1207 22:45:30.258361       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258392       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258433       1 range_allocator.go:177] "Sending events to api server"
	I1207 22:45:30.258518       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258521       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1207 22:45:30.258698       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:30.258704       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258623       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258586       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258614       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258633       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258627       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.258636       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.266934       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:30.266989       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.358127       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:30.358147       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 22:45:30.358154       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 22:45:30.367867       1 shared_informer.go:377] "Caches are synced"
	E1207 22:52:16.405357       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.409152       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.414778       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.416310       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.419791       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1207 22:52:16.422778       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [1241f85066cf] <==
	I1207 22:45:27.866139       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:45:27.929383       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:28.029574       1 shared_informer.go:377] "Caches are synced"
	I1207 22:45:28.029634       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:45:28.029756       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:45:28.054488       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:45:28.054553       1 server_linux.go:136] "Using iptables Proxier"
	I1207 22:45:28.061247       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:45:28.061709       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 22:45:28.061749       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:45:28.063126       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:45:28.063159       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:45:28.063200       1 config.go:200] "Starting service config controller"
	I1207 22:45:28.063206       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:45:28.063199       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:45:28.063226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:45:28.063264       1 config.go:309] "Starting node config controller"
	I1207 22:45:28.063271       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:45:28.063278       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:45:28.163244       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:45:28.163313       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:45:28.163332       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c731d93d4631] <==
	I1207 22:44:36.479925       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:44:36.568255       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:44:38.168788       1 shared_informer.go:377] "Caches are synced"
	I1207 22:44:38.168826       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:44:38.169011       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:44:38.212997       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:44:38.213059       1 server_linux.go:136] "Using iptables Proxier"
	I1207 22:44:38.218592       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:44:38.219063       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 22:44:38.219101       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:44:38.220955       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:44:38.220984       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:44:38.220986       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:44:38.221014       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:44:38.221097       1 config.go:309] "Starting node config controller"
	I1207 22:44:38.221103       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:44:38.221109       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:44:38.221482       1 config.go:200] "Starting service config controller"
	I1207 22:44:38.221666       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:44:38.321151       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 22:44:38.321157       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 22:44:38.322219       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8ccc720180cb] <==
	I1207 22:44:36.923472       1 serving.go:386] Generated self-signed cert in-memory
	W1207 22:44:38.008074       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 22:44:38.008306       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 22:44:38.008328       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 22:44:38.008462       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 22:44:38.053370       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 22:44:38.053407       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:44:38.056294       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:44:38.056353       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:44:38.056369       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 22:44:38.056891       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 22:44:38.156767       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [aeabb2a736d2] <==
	I1207 22:45:25.216680       1 serving.go:386] Generated self-signed cert in-memory
	W1207 22:45:27.046224       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 22:45:27.046264       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1207 22:45:27.046277       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 22:45:27.046286       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 22:45:27.062035       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 22:45:27.062058       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:45:27.063812       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 22:45:27.063845       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 22:45:27.063924       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 22:45:27.064004       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 22:45:27.164533       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 07 22:55:21 functional-442811 kubelet[8486]: E1207 22:55:21.282348    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-chtwb" podUID="04735f3d-d1e5-4bcc-b82f-7645afbc28fb"
	Dec 07 22:55:22 functional-442811 kubelet[8486]: E1207 22:55:22.279405    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="a0787535-37fd-46f0-bd6d-f603d1557ee2"
	Dec 07 22:55:22 functional-442811 kubelet[8486]: E1207 22:55:22.279548    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-6bwdx" podUID="bb6e8c71-aaa3-4a46-9946-a5ac8718a889"
	Dec 07 22:55:26 functional-442811 kubelet[8486]: E1207 22:55:26.281769    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-zm2lh" podUID="e691e40e-e4d5-4b7e-b852-ca016ddf9542"
	Dec 07 22:55:28 functional-442811 kubelet[8486]: E1207 22:55:28.279341    8486 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-6kf2d" containerName="dashboard-metrics-scraper"
	Dec 07 22:55:28 functional-442811 kubelet[8486]: E1207 22:55:28.282156    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-6kf2d" podUID="5a9529c4-8a48-414f-8791-063053747e2e"
	Dec 07 22:55:29 functional-442811 kubelet[8486]: E1207 22:55:29.282311    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="df055e75-dc3b-436d-9b15-60ec788da8a6"
	Dec 07 22:55:32 functional-442811 kubelet[8486]: E1207 22:55:32.280249    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-gldsc" podUID="df13df8d-919f-420d-b0e4-4e5e489d4991"
	Dec 07 22:55:34 functional-442811 kubelet[8486]: E1207 22:55:34.280677    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-6bwdx" podUID="bb6e8c71-aaa3-4a46-9946-a5ac8718a889"
	Dec 07 22:55:35 functional-442811 kubelet[8486]: E1207 22:55:35.279166    8486 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-chtwb" containerName="kubernetes-dashboard"
	Dec 07 22:55:35 functional-442811 kubelet[8486]: E1207 22:55:35.282123    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-chtwb" podUID="04735f3d-d1e5-4bcc-b82f-7645afbc28fb"
	Dec 07 22:55:36 functional-442811 kubelet[8486]: E1207 22:55:36.279670    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="a0787535-37fd-46f0-bd6d-f603d1557ee2"
	Dec 07 22:55:38 functional-442811 kubelet[8486]: E1207 22:55:38.279898    8486 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-442811" containerName="kube-controller-manager"
	Dec 07 22:55:39 functional-442811 kubelet[8486]: E1207 22:55:39.279481    8486 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-442811" containerName="kube-scheduler"
	Dec 07 22:55:40 functional-442811 kubelet[8486]: E1207 22:55:40.281854    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-zm2lh" podUID="e691e40e-e4d5-4b7e-b852-ca016ddf9542"
	Dec 07 22:55:40 functional-442811 kubelet[8486]: E1207 22:55:40.282205    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="df055e75-dc3b-436d-9b15-60ec788da8a6"
	Dec 07 22:55:42 functional-442811 kubelet[8486]: E1207 22:55:42.279782    8486 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-j4t8w" containerName="coredns"
	Dec 07 22:55:43 functional-442811 kubelet[8486]: E1207 22:55:43.279001    8486 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-6kf2d" containerName="dashboard-metrics-scraper"
	Dec 07 22:55:43 functional-442811 kubelet[8486]: E1207 22:55:43.281702    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-6kf2d" podUID="5a9529c4-8a48-414f-8791-063053747e2e"
	Dec 07 22:55:47 functional-442811 kubelet[8486]: E1207 22:55:47.280502    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-gldsc" podUID="df13df8d-919f-420d-b0e4-4e5e489d4991"
	Dec 07 22:55:48 functional-442811 kubelet[8486]: E1207 22:55:48.279877    8486 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-chtwb" containerName="kubernetes-dashboard"
	Dec 07 22:55:48 functional-442811 kubelet[8486]: E1207 22:55:48.282443    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-chtwb" podUID="04735f3d-d1e5-4bcc-b82f-7645afbc28fb"
	Dec 07 22:55:49 functional-442811 kubelet[8486]: E1207 22:55:49.280308    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-6bwdx" podUID="bb6e8c71-aaa3-4a46-9946-a5ac8718a889"
	Dec 07 22:55:51 functional-442811 kubelet[8486]: E1207 22:55:51.280253    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="a0787535-37fd-46f0-bd6d-f603d1557ee2"
	Dec 07 22:55:53 functional-442811 kubelet[8486]: E1207 22:55:53.282005    8486 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-zm2lh" podUID="e691e40e-e4d5-4b7e-b852-ca016ddf9542"
	
	
	==> storage-provisioner [0103126cf1fa] <==
	I1207 22:44:50.702907       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 22:44:50.702947       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1207 22:44:50.705054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:54.159918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:44:58.420300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:02.019059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:05.072830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.094990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.100139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 22:45:08.100304       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 22:45:08.100474       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-442811_7469dd5e-c834-4569-8c4d-488a475d8a7b!
	I1207 22:45:08.100445       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6744592d-205c-493c-9eed-33025935219a", APIVersion:"v1", ResourceVersion:"558", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-442811_7469dd5e-c834-4569-8c4d-488a475d8a7b became leader
	W1207 22:45:08.102399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:08.105316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 22:45:08.200705       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-442811_7469dd5e-c834-4569-8c4d-488a475d8a7b!
	W1207 22:45:10.108971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:10.113858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:12.117456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:12.121527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:14.124245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:14.128190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:16.131409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:16.136192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:18.138850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:45:18.142809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [88ac0a9f9519] <==
	W1207 22:55:27.479163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:29.482361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:29.486066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:31.489947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:31.495795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:33.499861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:33.503827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:35.507272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:35.512494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:37.515807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:37.519868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:39.523393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:39.528704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:41.531737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:41.535576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:43.538781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:43.542359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:45.546018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:45.550981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:47.554041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:47.559155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:49.562664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:49.566584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:51.570049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:55:51.573747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-442811 -n functional-442811
helpers_test.go:269: (dbg) Run:  kubectl --context functional-442811 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod dashboard-metrics-scraper-5565989548-6kf2d kubernetes-dashboard-b84665fb8-chtwb
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-442811 describe pod busybox-mount hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod dashboard-metrics-scraper-5565989548-6kf2d kubernetes-dashboard-b84665fb8-chtwb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-442811 describe pod busybox-mount hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod dashboard-metrics-scraper-5565989548-6kf2d kubernetes-dashboard-b84665fb8-chtwb: exit status 1 (97.334269ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:52:04 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  mount-munger:
	    Container ID:  docker://e878acdb795753bebae0d3951e3f7b095f3224bc0a9d688b6e8a2b128fd36dac
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 07 Dec 2025 22:52:06 +0000
	      Finished:     Sun, 07 Dec 2025 22:52:06 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxvhm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fxvhm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m49s  default-scheduler  Successfully assigned default/busybox-mount to functional-442811
	  Normal  Pulling    3m49s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m47s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.967s (1.967s including waiting). Image size: 4403845 bytes.
	  Normal  Created    3m47s  kubelet            Container created
	  Normal  Started    3m47s  kubelet            Container started
	
	
	Name:             hello-node-5758569b79-6bwdx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:51:33 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vq8rd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vq8rd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m20s                 default-scheduler  Successfully assigned default/hello-node-5758569b79-6bwdx to functional-442811
	  Normal   Pulling    74s (x5 over 4m19s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     73s (x5 over 4m18s)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     73s (x5 over 4m18s)   kubelet            Error: ErrImagePull
	  Warning  Failed     19s (x15 over 4m17s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4s (x16 over 4m17s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-gldsc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dh5vh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dh5vh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m56s                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-gldsc to functional-442811
	  Normal   Pulling    7m2s (x5 over 9m56s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m1s (x5 over 9m55s)    kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m1s (x5 over 9m55s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m53s (x20 over 9m55s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m40s (x21 over 9m55s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-844cf969f6-zm2lh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:51 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-48z94 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-48z94:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-844cf969f6-zm2lh to functional-442811
	  Warning  Failed     10m                    kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m12s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m11s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     7m11s (x4 over 9m45s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m55s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    0s (x41 over 10m)      kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:54 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r8xnp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r8xnp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m59s                   default-scheduler  Successfully assigned default/nginx-svc to functional-442811
	  Warning  Failed     8m25s                   kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m53s (x5 over 9m59s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     6m52s (x4 over 9m58s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m52s (x5 over 9m58s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m47s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m32s (x21 over 9m58s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-442811/192.168.49.2
	Start Time:       Sun, 07 Dec 2025 22:45:59 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vt2bl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vt2bl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m54s                   default-scheduler  Successfully assigned default/sp-pod to functional-442811
	  Normal   Pulling    7m6s (x5 over 9m54s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m5s (x5 over 9m53s)    kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m5s (x5 over 9m53s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m51s (x21 over 9m53s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4m51s (x21 over 9m53s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-6kf2d" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-chtwb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-442811 describe pod busybox-mount hello-node-5758569b79-6bwdx hello-node-connect-9f67c86d4-gldsc mysql-844cf969f6-zm2lh nginx-svc sp-pod dashboard-metrics-scraper-5565989548-6kf2d kubernetes-dashboard-b84665fb8-chtwb: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.53s)
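
Note: every pod described above that is stuck in ImagePullBackOff reports the same root cause, Docker Hub's unauthenticated pull rate limit ("toomanyrequests"), rather than a defect in the code under test. A minimal local workaround, assuming the host itself can still obtain the image, is to pre-load it into the cluster so the kubelet never has to pull from Docker Hub; the profile name below is simply the one used in this run:

	docker pull docker.io/mysql:5.7
	minikube -p functional-442811 image load docker.io/mysql:5.7

Authenticating the node's Docker daemon against Docker Hub would avoid the limit as well.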

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (240.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-442811 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [df055e75-dc3b-436d-9b15-60ec788da8a6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
functional_test_tunnel_test.go:216: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-442811 -n functional-442811
functional_test_tunnel_test.go:216: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-12-07 22:49:54.409377851 +0000 UTC m=+1224.695430435
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-442811 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-442811 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-442811/192.168.49.2
Start Time:       Sun, 07 Dec 2025 22:45:54 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:  10.244.0.9
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r8xnp (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-r8xnp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  4m                    default-scheduler  Successfully assigned default/nginx-svc to functional-442811
  Warning  Failed     2m26s                 kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    54s (x5 over 4m)      kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     53s (x4 over 3m59s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     53s (x5 over 3m59s)   kubelet            Error: ErrImagePull
  Normal   BackOff    13s (x14 over 3m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     13s (x14 over 3m59s)  kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-442811 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-442811 logs nginx-svc -n default: exit status 1 (69.703717ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-442811 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (240.65s)
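
Note: the wait fails for the same reason as the MySQL test above; the nginx-svc pod never becomes Ready because docker.io/nginx:alpine cannot be pulled. A quick way to confirm the waiting reason directly (sketch only, using the context name from this run):

	kubectl --context functional-442811 get pod nginx-svc \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'
	# Prints ImagePullBackOff (or ErrImagePull) while the pull keeps failing.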

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (99.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
I1207 22:50:04.547942  397166 retry.go:31] will retry after 2.887493285s: Temporary Error: Get "http://10.110.150.78": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1207 22:50:17.437265  397166 retry.go:31] will retry after 5.420472489s: Temporary Error: Get "http://10.110.150.78": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1207 22:50:32.858474  397166 retry.go:31] will retry after 8.144422745s: Temporary Error: Get "http://10.110.150.78": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E1207 22:50:33.566104  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1207 22:50:51.004317  397166 retry.go:31] will retry after 9.972122896s: Temporary Error: Get "http://10.110.150.78": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1207 22:51:10.977983  397166 retry.go:31] will retry after 12.630370539s: Temporary Error: Get "http://10.110.150.78": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.110.150.78": Temporary Error: Get "http://10.110.150.78": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-442811 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
nginx-svc   LoadBalancer   10.100.134.119   10.100.134.119   80:30826/TCP   5m39s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (99.12s)
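
Note: AccessDirect curls the service's LoadBalancer IP, which is reachable only while a "minikube tunnel" process for this profile is routing to it, and answers only once a Running pod backs the service. A sketch of reproducing the check by hand, assuming the tunnel is already running:

	SVC_IP=$(kubectl --context functional-442811 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl --max-time 5 "http://${SVC_IP}"
	# With nginx stuck in ImagePullBackOff the service has no endpoints, so the
	# request cannot succeed; in this run it timed out, as in the retries above.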

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-442811 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-442811 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-6bwdx" [bb6e8c71-aaa3-4a46-9946-a5ac8718a889] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-442811 -n functional-442811
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-07 23:01:34.235542067 +0000 UTC m=+1924.521594637
functional_test.go:1460: (dbg) Run:  kubectl --context functional-442811 describe po hello-node-5758569b79-6bwdx -n default
functional_test.go:1460: (dbg) kubectl --context functional-442811 describe po hello-node-5758569b79-6bwdx -n default:
Name:             hello-node-5758569b79-6bwdx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-442811/192.168.49.2
Start Time:       Sun, 07 Dec 2025 22:51:33 +0000
Labels:           app=hello-node
                  pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
  IP:           10.244.0.12
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vq8rd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-vq8rd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-5758569b79-6bwdx to functional-442811
  Normal   Pulling    6m55s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m54s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m54s (x5 over 9m59s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m54s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m40s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-442811 logs hello-node-5758569b79-6bwdx -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-442811 logs hello-node-5758569b79-6bwdx -n default: exit status 1 (72.21222ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-6bwdx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-442811 logs hello-node-5758569b79-6bwdx -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.60s)
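
Note: the three ServiceCmd failures that follow (HTTPS, Format, URL) are downstream of this one; hello-node never gets a running pod, so "minikube service" exits with SVC_UNREACHABLE even though it can still compute the NodePort URL. A sketch of the check that shows the missing backing, using the context name from this run:

	kubectl --context functional-442811 get deployment hello-node
	kubectl --context functional-442811 get endpoints hello-node
	# READY stays 0/1 and ENDPOINTS stays <none> while the image pull keeps failing.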

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-442811 service --namespace=default --https --url hello-node: exit status 115 (542.716813ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30479
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-442811 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.54s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-442811 service hello-node --url --format={{.IP}}: exit status 115 (541.468493ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-442811 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.54s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-442811 service hello-node --url: exit status 115 (543.013441ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30479
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-442811 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30479
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.54s)

                                                
                                    

Test pass (394/434)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 11.49
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 8.93
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.23
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 8.88
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
29 TestDownloadOnlyKic 0.42
30 TestBinaryMirror 0.84
31 TestOffline 94.18
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 122.17
38 TestAddons/serial/Volcano 41.09
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 9.49
44 TestAddons/parallel/Registry 15.78
45 TestAddons/parallel/RegistryCreds 0.64
46 TestAddons/parallel/Ingress 21.39
47 TestAddons/parallel/InspektorGadget 10.86
48 TestAddons/parallel/MetricsServer 5.66
50 TestAddons/parallel/CSI 53.05
51 TestAddons/parallel/Headlamp 16.58
52 TestAddons/parallel/CloudSpanner 5.46
53 TestAddons/parallel/LocalPath 54.59
54 TestAddons/parallel/NvidiaDevicePlugin 5.43
55 TestAddons/parallel/Yakd 10.73
56 TestAddons/parallel/AmdGpuDevicePlugin 5.44
57 TestAddons/StoppedEnableDisable 11.22
58 TestCertOptions 28.87
59 TestCertExpiration 240.08
60 TestDockerFlags 29.44
61 TestForceSystemdFlag 29.67
62 TestForceSystemdEnv 30.62
67 TestErrorSpam/setup 21.64
68 TestErrorSpam/start 0.68
69 TestErrorSpam/status 0.95
70 TestErrorSpam/pause 1.24
71 TestErrorSpam/unpause 1.32
72 TestErrorSpam/stop 11.05
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 64.12
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 38.69
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.06
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.46
84 TestFunctional/serial/CacheCmd/cache/add_local 1.42
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.36
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 40.43
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.02
95 TestFunctional/serial/LogsFileCmd 1.05
96 TestFunctional/serial/InvalidService 3.99
98 TestFunctional/parallel/ConfigCmd 0.46
100 TestFunctional/parallel/DryRun 0.45
101 TestFunctional/parallel/InternationalLanguage 0.17
102 TestFunctional/parallel/StatusCmd 0.98
106 TestFunctional/parallel/ServiceCmdConnect 12.52
107 TestFunctional/parallel/AddonsCmd 0.17
108 TestFunctional/parallel/PersistentVolumeClaim 36.91
110 TestFunctional/parallel/SSHCmd 0.57
111 TestFunctional/parallel/CpCmd 1.65
112 TestFunctional/parallel/MySQL 23.78
113 TestFunctional/parallel/FileSync 0.32
114 TestFunctional/parallel/CertSync 1.79
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.3
122 TestFunctional/parallel/License 0.36
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
129 TestFunctional/parallel/ImageCommands/ImageBuild 3.41
130 TestFunctional/parallel/ImageCommands/Setup 1.77
131 TestFunctional/parallel/Version/short 0.07
132 TestFunctional/parallel/Version/components 0.5
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.23
137 TestFunctional/parallel/ProfileCmd/profile_list 0.41
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.98
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.65
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.6
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
146 TestFunctional/parallel/ServiceCmd/DeployApp 12.17
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
153 TestFunctional/parallel/DockerEnv/bash 0.97
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
157 TestFunctional/parallel/MountCmd/any-port 18.03
158 TestFunctional/parallel/ServiceCmd/List 0.97
159 TestFunctional/parallel/ServiceCmd/JSONOutput 0.95
160 TestFunctional/parallel/ServiceCmd/HTTPS 0.57
161 TestFunctional/parallel/ServiceCmd/Format 0.59
162 TestFunctional/parallel/ServiceCmd/URL 0.6
163 TestFunctional/parallel/MountCmd/specific-port 1.98
164 TestFunctional/parallel/MountCmd/VerifyCleanup 1.82
165 TestFunctional/delete_echo-server_images 0.04
166 TestFunctional/delete_my-image_image 0.02
167 TestFunctional/delete_minikube_cached_images 0.02
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 58.25
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 40.69
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.39
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.36
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.39
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 41.4
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.03
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.03
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.16
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.46
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.39
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.17
198 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.96
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.61
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.87
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.29
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.89
214 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.08
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.31
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.32
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.49
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.22
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.22
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.22
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.38
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.84
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash 1.11
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.13
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.89
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.64
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.32
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.45
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.6
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.36
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.42
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.4
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.4
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 7.48
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.98
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.93
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.71
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.7
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
262 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
266 TestMultiControlPlane/serial/StartCluster 123.67
267 TestMultiControlPlane/serial/DeployApp 5.97
268 TestMultiControlPlane/serial/PingHostFromPods 1.27
269 TestMultiControlPlane/serial/AddWorkerNode 34.08
270 TestMultiControlPlane/serial/NodeLabels 0.07
271 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
272 TestMultiControlPlane/serial/CopyFile 17.66
273 TestMultiControlPlane/serial/StopSecondaryNode 11.68
274 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.74
275 TestMultiControlPlane/serial/RestartSecondaryNode 37.55
276 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
277 TestMultiControlPlane/serial/RestartClusterKeepsNodes 150.43
278 TestMultiControlPlane/serial/DeleteSecondaryNode 9.7
279 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
280 TestMultiControlPlane/serial/StopCluster 32.65
281 TestMultiControlPlane/serial/RestartCluster 72.35
282 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
283 TestMultiControlPlane/serial/AddSecondaryNode 53.5
284 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
287 TestImageBuild/serial/Setup 23.28
288 TestImageBuild/serial/NormalBuild 1.14
289 TestImageBuild/serial/BuildWithBuildArg 0.69
290 TestImageBuild/serial/BuildWithDockerIgnore 0.49
291 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.52
296 TestJSONOutput/start/Command 63.03
297 TestJSONOutput/start/Audit 0
299 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/pause/Command 0.51
303 TestJSONOutput/pause/Audit 0
305 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
308 TestJSONOutput/unpause/Command 0.48
309 TestJSONOutput/unpause/Audit 0
311 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
312 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
314 TestJSONOutput/stop/Command 10.94
315 TestJSONOutput/stop/Audit 0
317 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
318 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
319 TestErrorJSONOutput 0.24
321 TestKicCustomNetwork/create_custom_network 25.69
322 TestKicCustomNetwork/use_default_bridge_network 25.44
323 TestKicExistingNetwork 25.66
324 TestKicCustomSubnet 23.21
325 TestKicStaticIP 26.7
326 TestMainNoArgs 0.06
327 TestMinikubeProfile 53.11
330 TestMountStart/serial/StartWithMountFirst 6.49
331 TestMountStart/serial/VerifyMountFirst 0.27
332 TestMountStart/serial/StartWithMountSecond 9.54
333 TestMountStart/serial/VerifyMountSecond 0.27
334 TestMountStart/serial/DeleteFirst 1.53
335 TestMountStart/serial/VerifyMountPostDelete 0.27
336 TestMountStart/serial/Stop 1.25
337 TestMountStart/serial/RestartStopped 9.36
338 TestMountStart/serial/VerifyMountPostStop 0.27
341 TestMultiNode/serial/FreshStart2Nodes 75.57
342 TestMultiNode/serial/DeployApp2Nodes 5.23
343 TestMultiNode/serial/PingHostFrom2Pods 0.9
344 TestMultiNode/serial/AddNode 33.83
345 TestMultiNode/serial/MultiNodeLabels 0.06
346 TestMultiNode/serial/ProfileList 0.68
347 TestMultiNode/serial/CopyFile 9.95
348 TestMultiNode/serial/StopNode 2.29
349 TestMultiNode/serial/StartAfterStop 8.71
350 TestMultiNode/serial/RestartKeepsNodes 69.69
351 TestMultiNode/serial/DeleteNode 5.36
352 TestMultiNode/serial/StopMultiNode 21.99
353 TestMultiNode/serial/RestartMultiNode 49.59
354 TestMultiNode/serial/ValidateNameConflict 26.63
359 TestPreload 104.85
361 TestScheduledStopUnix 98.79
362 TestSkaffold 84.06
364 TestInsufficientStorage 12.26
365 TestRunningBinaryUpgrade 345.75
367 TestKubernetesUpgrade 322.11
368 TestMissingContainerUpgrade 114.34
370 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
371 TestNoKubernetes/serial/StartWithK8s 41.64
372 TestNoKubernetes/serial/StartWithStopK8s 16.68
373 TestNoKubernetes/serial/Start 8.81
374 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
375 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
376 TestNoKubernetes/serial/ProfileList 6.72
377 TestNoKubernetes/serial/Stop 1.32
378 TestNoKubernetes/serial/StartNoArgs 8.59
379 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
391 TestStoppedBinaryUpgrade/Setup 3.42
392 TestStoppedBinaryUpgrade/Upgrade 297.5
401 TestPause/serial/Start 70.57
402 TestNetworkPlugins/group/auto/Start 66.23
403 TestPause/serial/SecondStartNoReconfiguration 43.01
404 TestStoppedBinaryUpgrade/MinikubeLogs 1.13
405 TestNetworkPlugins/group/auto/KubeletFlags 0.32
406 TestNetworkPlugins/group/kindnet/Start 54.76
407 TestNetworkPlugins/group/auto/NetCatPod 12.21
408 TestNetworkPlugins/group/calico/Start 62.17
409 TestPause/serial/Pause 0.55
410 TestPause/serial/VerifyStatus 0.36
411 TestPause/serial/Unpause 0.58
412 TestPause/serial/PauseAgain 0.64
413 TestPause/serial/DeletePaused 2.37
414 TestNetworkPlugins/group/auto/DNS 0.16
415 TestNetworkPlugins/group/auto/Localhost 0.13
416 TestNetworkPlugins/group/auto/HairPin 0.13
417 TestPause/serial/VerifyDeletedResources 0.81
418 TestNetworkPlugins/group/custom-flannel/Start 47.13
419 TestNetworkPlugins/group/false/Start 64.07
420 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
421 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
422 TestNetworkPlugins/group/kindnet/NetCatPod 10.2
423 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
424 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.23
425 TestNetworkPlugins/group/calico/ControllerPod 6.01
426 TestNetworkPlugins/group/calico/KubeletFlags 0.3
427 TestNetworkPlugins/group/calico/NetCatPod 9.18
428 TestNetworkPlugins/group/kindnet/DNS 0.17
429 TestNetworkPlugins/group/kindnet/Localhost 0.16
430 TestNetworkPlugins/group/kindnet/HairPin 0.14
431 TestNetworkPlugins/group/custom-flannel/DNS 0.17
432 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
433 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
434 TestNetworkPlugins/group/calico/DNS 0.16
435 TestNetworkPlugins/group/calico/Localhost 0.13
436 TestNetworkPlugins/group/calico/HairPin 0.14
437 TestNetworkPlugins/group/enable-default-cni/Start 68.01
438 TestNetworkPlugins/group/flannel/Start 42.5
439 TestNetworkPlugins/group/false/KubeletFlags 0.39
440 TestNetworkPlugins/group/false/NetCatPod 10.34
441 TestNetworkPlugins/group/bridge/Start 68.46
442 TestNetworkPlugins/group/false/DNS 0.16
443 TestNetworkPlugins/group/false/Localhost 0.14
444 TestNetworkPlugins/group/false/HairPin 0.15
445 TestNetworkPlugins/group/kubenet/Start 67.24
446 TestNetworkPlugins/group/flannel/ControllerPod 6.01
447 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
448 TestNetworkPlugins/group/flannel/NetCatPod 10.18
449 TestNetworkPlugins/group/flannel/DNS 0.17
450 TestNetworkPlugins/group/flannel/Localhost 0.14
451 TestNetworkPlugins/group/flannel/HairPin 0.13
452 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
453 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.21
454 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
455 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
456 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
457 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
458 TestNetworkPlugins/group/bridge/NetCatPod 8.33
460 TestStartStop/group/old-k8s-version/serial/FirstStart 43.03
461 TestNetworkPlugins/group/bridge/DNS 0.19
462 TestNetworkPlugins/group/bridge/Localhost 0.14
463 TestNetworkPlugins/group/bridge/HairPin 0.12
465 TestStartStop/group/no-preload/serial/FirstStart 42.74
466 TestNetworkPlugins/group/kubenet/KubeletFlags 0.36
467 TestNetworkPlugins/group/kubenet/NetCatPod 12.24
469 TestStartStop/group/embed-certs/serial/FirstStart 69.61
470 TestNetworkPlugins/group/kubenet/DNS 0.22
471 TestNetworkPlugins/group/kubenet/Localhost 0.17
472 TestNetworkPlugins/group/kubenet/HairPin 0.19
473 TestStartStop/group/old-k8s-version/serial/DeployApp 8.31
474 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.01
475 TestStartStop/group/old-k8s-version/serial/Stop 11.85
476 TestStartStop/group/no-preload/serial/DeployApp 10.3
478 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.11
479 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
480 TestStartStop/group/old-k8s-version/serial/SecondStart 47.41
481 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.9
482 TestStartStop/group/no-preload/serial/Stop 11.12
483 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
484 TestStartStop/group/no-preload/serial/SecondStart 49.73
485 TestStartStop/group/embed-certs/serial/DeployApp 10.27
486 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.82
487 TestStartStop/group/embed-certs/serial/Stop 11.05
488 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
489 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
490 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
491 TestStartStop/group/embed-certs/serial/SecondStart 51.27
492 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
493 TestStartStop/group/old-k8s-version/serial/Pause 2.64
495 TestStartStop/group/newest-cni/serial/FirstStart 26.83
496 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
497 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
498 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
499 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.88
500 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
501 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.15
502 TestStartStop/group/no-preload/serial/Pause 2.91
503 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.29
504 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.86
505 TestStartStop/group/newest-cni/serial/DeployApp 0
506 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.84
507 TestStartStop/group/newest-cni/serial/Stop 10.99
508 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
509 TestStartStop/group/newest-cni/serial/SecondStart 13.28
510 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
511 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
512 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
513 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
514 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
515 TestStartStop/group/newest-cni/serial/Pause 2.69
516 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
517 TestStartStop/group/embed-certs/serial/Pause 2.84
518 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
519 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
520 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
521 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.41
TestDownloadOnly/v1.28.0/json-events (11.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-558493 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-558493 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.493607735s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (11.49s)
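For context, the json-events step above simply shells out to the minikube binary with the flags recorded in the log. A minimal Go sketch of that invocation pattern, assuming the binary sits at out/minikube-linux-amd64 as in this run (runDownloadOnly is an illustrative helper, not part of the minikube test suite):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runDownloadOnly mirrors the command line recorded in the log above; the
// helper name and error handling are illustrative, not minikube test code.
func runDownloadOnly(profile, k8sVersion string) error {
	cmd := exec.Command("out/minikube-linux-amd64",
		"start", "-o=json", "--download-only", "-p", profile,
		"--force", "--alsologtostderr",
		"--kubernetes-version="+k8sVersion,
		"--container-runtime=docker", "--driver=docker")
	start := time.Now()
	out, err := cmd.CombinedOutput()
	fmt.Printf("finished in %v, %d bytes of output\n", time.Since(start), len(out))
	return err
}

func main() {
	if err := runDownloadOnly("download-only-558493", "v1.28.0"); err != nil {
		fmt.Println("download-only run failed:", err)
	}
}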

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1207 22:29:41.246683  397166 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1207 22:29:41.246776  397166 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-393577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
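The preload-exists step above amounts to verifying that the cached tarball is present on disk at the path printed in the log. A minimal sketch of that kind of check, assuming the cache layout shown above (preloadPath is an illustrative helper; MINIKUBE_HOME stands in for the .minikube directory used in this run):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath rebuilds the cache path printed in the log for a given
// Kubernetes version; the v18/docker-overlay2/amd64 naming matches the
// filename shown above.
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	// MINIKUBE_HOME is assumed to point at the .minikube directory.
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload:", p)
	} else {
		fmt.Println("preload not cached:", err)
	}
}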

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-558493
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-558493: exit status 85 (76.023499ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-558493 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-558493 │ jenkins │ v1.37.0 │ 07 Dec 25 22:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:29:29
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:29:29.807756  397178 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:29:29.807850  397178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:29:29.807858  397178 out.go:374] Setting ErrFile to fd 2...
	I1207 22:29:29.807862  397178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:29:29.808097  397178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	W1207 22:29:29.808259  397178 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22054-393577/.minikube/config/config.json: open /home/jenkins/minikube-integration/22054-393577/.minikube/config/config.json: no such file or directory
	I1207 22:29:29.808804  397178 out.go:368] Setting JSON to true
	I1207 22:29:29.809781  397178 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4313,"bootTime":1765142257,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:29:29.809844  397178 start.go:143] virtualization: kvm guest
	I1207 22:29:29.813169  397178 out.go:99] [download-only-558493] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1207 22:29:29.813307  397178 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22054-393577/.minikube/cache/preloaded-tarball: no such file or directory
	I1207 22:29:29.813340  397178 notify.go:221] Checking for updates...
	I1207 22:29:29.814498  397178 out.go:171] MINIKUBE_LOCATION=22054
	I1207 22:29:29.815992  397178 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:29:29.817574  397178 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	I1207 22:29:29.818593  397178 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	I1207 22:29:29.819661  397178 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1207 22:29:29.821713  397178 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 22:29:29.821909  397178 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:29:29.846245  397178 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:29:29.846330  397178 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:29:29.899044  397178 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-07 22:29:29.889727797 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:29:29.899149  397178 docker.go:319] overlay module found
	I1207 22:29:29.900741  397178 out.go:99] Using the docker driver based on user configuration
	I1207 22:29:29.900765  397178 start.go:309] selected driver: docker
	I1207 22:29:29.900772  397178 start.go:927] validating driver "docker" against <nil>
	I1207 22:29:29.900855  397178 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:29:29.958741  397178 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-07 22:29:29.948974331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:29:29.959179  397178 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 22:29:29.960030  397178 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1207 22:29:29.960282  397178 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 22:29:29.962226  397178 out.go:171] Using Docker driver with root privileges
	I1207 22:29:29.963378  397178 cni.go:84] Creating CNI manager for ""
	I1207 22:29:29.963456  397178 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 22:29:29.963490  397178 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 22:29:29.963583  397178 start.go:353] cluster config:
	{Name:download-only-558493 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-558493 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:29:29.964848  397178 out.go:99] Starting "download-only-558493" primary control-plane node in "download-only-558493" cluster
	I1207 22:29:29.964910  397178 cache.go:134] Beginning downloading kic base image for docker with docker
	I1207 22:29:29.966066  397178 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1207 22:29:29.966104  397178 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1207 22:29:29.966145  397178 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 22:29:29.983085  397178 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1207 22:29:29.983288  397178 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1207 22:29:29.983370  397178 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1207 22:29:30.309112  397178 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1207 22:29:30.309166  397178 cache.go:65] Caching tarball of preloaded images
	I1207 22:29:30.309347  397178 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1207 22:29:30.311103  397178 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1207 22:29:30.311129  397178 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1207 22:29:30.409964  397178 preload.go:295] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1207 22:29:30.410084  397178 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/22054-393577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1207 22:29:36.415386  397178 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	
	
	* The control-plane node download-only-558493 host does not exist
	  To start a cluster, run: "minikube start -p download-only-558493"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
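Note that the non-zero exit is expected here: in this run, out/minikube-linux-amd64 logs returns exit status 85 together with the "host does not exist" message, because a download-only profile never creates a node. A sketch of how a caller driving the CLI from Go might treat that case (illustrative only; any exit-code semantics beyond what this log shows are an assumption):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Command line copied from the test log above.
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-558493")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("logs succeeded")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 85:
		// In this report, exit status 85 accompanies the "host does not
		// exist" message printed for download-only profiles.
		fmt.Println("no host to collect logs from (expected for download-only)")
	default:
		fmt.Println("unexpected failure:", err)
	}
	fmt.Printf("captured %d bytes of output\n", len(out))
}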

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-558493
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (8.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-793344 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-793344 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.928395751s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (8.93s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1207 22:29:50.637685  397166 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1207 22:29:50.637735  397166 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-393577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-793344
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-793344: exit status 85 (73.189461ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-558493 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-558493 │ jenkins │ v1.37.0 │ 07 Dec 25 22:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 07 Dec 25 22:29 UTC │ 07 Dec 25 22:29 UTC │
	│ delete  │ -p download-only-558493                                                                                                                                                       │ download-only-558493 │ jenkins │ v1.37.0 │ 07 Dec 25 22:29 UTC │ 07 Dec 25 22:29 UTC │
	│ start   │ -o=json --download-only -p download-only-793344 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-793344 │ jenkins │ v1.37.0 │ 07 Dec 25 22:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:29:41
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:29:41.762907  397544 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:29:41.763142  397544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:29:41.763150  397544 out.go:374] Setting ErrFile to fd 2...
	I1207 22:29:41.763155  397544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:29:41.763355  397544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 22:29:41.763805  397544 out.go:368] Setting JSON to true
	I1207 22:29:41.765156  397544 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4325,"bootTime":1765142257,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:29:41.765388  397544 start.go:143] virtualization: kvm guest
	I1207 22:29:41.767227  397544 out.go:99] [download-only-793344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:29:41.767378  397544 notify.go:221] Checking for updates...
	I1207 22:29:41.768411  397544 out.go:171] MINIKUBE_LOCATION=22054
	I1207 22:29:41.769688  397544 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:29:41.771182  397544 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	I1207 22:29:41.772346  397544 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	I1207 22:29:41.773704  397544 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1207 22:29:41.775949  397544 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 22:29:41.776175  397544 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:29:41.799576  397544 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:29:41.799743  397544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:29:41.853219  397544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-07 22:29:41.843699266 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:29:41.853332  397544 docker.go:319] overlay module found
	I1207 22:29:41.854848  397544 out.go:99] Using the docker driver based on user configuration
	I1207 22:29:41.854880  397544 start.go:309] selected driver: docker
	I1207 22:29:41.854887  397544 start.go:927] validating driver "docker" against <nil>
	I1207 22:29:41.854992  397544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:29:41.911448  397544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-07 22:29:41.901810811 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:29:41.911640  397544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 22:29:41.912129  397544 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1207 22:29:41.912289  397544 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 22:29:41.914127  397544 out.go:171] Using Docker driver with root privileges
	I1207 22:29:41.915441  397544 cni.go:84] Creating CNI manager for ""
	I1207 22:29:41.915525  397544 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 22:29:41.915539  397544 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 22:29:41.915640  397544 start.go:353] cluster config:
	{Name:download-only-793344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-793344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:29:41.916960  397544 out.go:99] Starting "download-only-793344" primary control-plane node in "download-only-793344" cluster
	I1207 22:29:41.916982  397544 cache.go:134] Beginning downloading kic base image for docker with docker
	I1207 22:29:41.918258  397544 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1207 22:29:41.918298  397544 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1207 22:29:41.918385  397544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 22:29:41.934902  397544 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1207 22:29:41.935090  397544 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1207 22:29:41.935112  397544 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1207 22:29:41.935117  397544 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1207 22:29:41.935130  397544 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1207 22:29:42.272952  397544 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1207 22:29:42.272985  397544 cache.go:65] Caching tarball of preloaded images
	I1207 22:29:42.273187  397544 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1207 22:29:42.274853  397544 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1207 22:29:42.274871  397544 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1207 22:29:42.372101  397544 preload.go:295] Got checksum from GCS API "cafa99c47d4d00983a02f051962239e0"
	I1207 22:29:42.372172  397544 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4?checksum=md5:cafa99c47d4d00983a02f051962239e0 -> /home/jenkins/minikube-integration/22054-393577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-793344 host does not exist
	  To start a cluster, run: "minikube start -p download-only-793344"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)
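As in the v1.28.0 run, the Last Start log above shows the preload tarball being fetched with an md5 checksum obtained from the GCS API and recorded as ?checksum=md5:... on the download URL. A minimal sketch of that download-and-verify pattern using only the standard library (fetchWithMD5 is an illustrative helper, not minikube's download package; the URL and digest are copied from the log above):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchWithMD5 downloads url to dst and verifies the payload against the
// expected md5 hex digest, mirroring the checksum step shown in the log.
func fetchWithMD5(url, dst, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	// Write to disk and hash in one pass.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and digest copied from the preload download lines above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4"
	if err := fetchWithMD5(url, "preloaded-images.tar.lz4", "cafa99c47d4d00983a02f051962239e0"); err != nil {
		fmt.Println("preload download failed:", err)
	}
}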

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-793344
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (8.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-116673 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-116673 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.884451078s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (8.88s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1207 22:29:59.980156  397166 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
I1207 22:29:59.980208  397166 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-393577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-116673
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-116673: exit status 85 (77.990167ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-558493 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker        │ download-only-558493 │ jenkins │ v1.37.0 │ 07 Dec 25 22:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                │ minikube             │ jenkins │ v1.37.0 │ 07 Dec 25 22:29 UTC │ 07 Dec 25 22:29 UTC │
	│ delete  │ -p download-only-558493                                                                                                                                                              │ download-only-558493 │ jenkins │ v1.37.0 │ 07 Dec 25 22:29 UTC │ 07 Dec 25 22:29 UTC │
	│ start   │ -o=json --download-only -p download-only-793344 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker  --container-runtime=docker        │ download-only-793344 │ jenkins │ v1.37.0 │ 07 Dec 25 22:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                │ minikube             │ jenkins │ v1.37.0 │ 07 Dec 25 22:29 UTC │ 07 Dec 25 22:29 UTC │
	│ delete  │ -p download-only-793344                                                                                                                                                              │ download-only-793344 │ jenkins │ v1.37.0 │ 07 Dec 25 22:29 UTC │ 07 Dec 25 22:29 UTC │
	│ start   │ -o=json --download-only -p download-only-116673 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-116673 │ jenkins │ v1.37.0 │ 07 Dec 25 22:29 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:29:51
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:29:51.150331  397913 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:29:51.150639  397913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:29:51.150650  397913 out.go:374] Setting ErrFile to fd 2...
	I1207 22:29:51.150657  397913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:29:51.150864  397913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 22:29:51.151343  397913 out.go:368] Setting JSON to true
	I1207 22:29:51.152294  397913 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4334,"bootTime":1765142257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:29:51.152352  397913 start.go:143] virtualization: kvm guest
	I1207 22:29:51.154108  397913 out.go:99] [download-only-116673] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:29:51.154307  397913 notify.go:221] Checking for updates...
	I1207 22:29:51.155461  397913 out.go:171] MINIKUBE_LOCATION=22054
	I1207 22:29:51.156838  397913 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:29:51.158189  397913 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	I1207 22:29:51.162243  397913 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	I1207 22:29:51.163582  397913 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1207 22:29:51.166288  397913 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 22:29:51.166567  397913 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:29:51.190024  397913 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:29:51.190111  397913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:29:51.243652  397913 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-07 22:29:51.233712723 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:29:51.243765  397913 docker.go:319] overlay module found
	I1207 22:29:51.245235  397913 out.go:99] Using the docker driver based on user configuration
	I1207 22:29:51.245276  397913 start.go:309] selected driver: docker
	I1207 22:29:51.245283  397913 start.go:927] validating driver "docker" against <nil>
	I1207 22:29:51.245364  397913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:29:51.301213  397913 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-07 22:29:51.291697163 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:29:51.301396  397913 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 22:29:51.301905  397913 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1207 22:29:51.302100  397913 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 22:29:51.303742  397913 out.go:171] Using Docker driver with root privileges
	I1207 22:29:51.304930  397913 cni.go:84] Creating CNI manager for ""
	I1207 22:29:51.305000  397913 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 22:29:51.305012  397913 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 22:29:51.305076  397913 start.go:353] cluster config:
	{Name:download-only-116673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-116673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:29:51.306321  397913 out.go:99] Starting "download-only-116673" primary control-plane node in "download-only-116673" cluster
	I1207 22:29:51.306337  397913 cache.go:134] Beginning downloading kic base image for docker with docker
	I1207 22:29:51.307464  397913 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1207 22:29:51.307492  397913 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1207 22:29:51.307607  397913 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 22:29:51.324348  397913 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1207 22:29:51.324475  397913 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1207 22:29:51.324491  397913 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1207 22:29:51.324496  397913 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1207 22:29:51.324511  397913 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1207 22:29:51.646168  397913 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1207 22:29:51.646200  397913 cache.go:65] Caching tarball of preloaded images
	I1207 22:29:51.646394  397913 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1207 22:29:51.648208  397913 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1207 22:29:51.648236  397913 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1207 22:29:51.745060  397913 preload.go:295] Got checksum from GCS API "7f0e1a4aaa3540d32279d04bf9728fae"
	I1207 22:29:51.745126  397913 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:7f0e1a4aaa3540d32279d04bf9728fae -> /home/jenkins/minikube-integration/22054-393577/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-116673 host does not exist
	  To start a cluster, run: "minikube start -p download-only-116673"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-116673
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (0.42s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-571075 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-571075" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-571075
--- PASS: TestDownloadOnlyKic (0.42s)

TestBinaryMirror (0.84s)
=== RUN   TestBinaryMirror
I1207 22:30:01.381053  397166 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-335725 --alsologtostderr --binary-mirror http://127.0.0.1:45389 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-335725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-335725
--- PASS: TestBinaryMirror (0.84s)

TestOffline (94.18s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-207715 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-207715 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m29.911872603s)
helpers_test.go:175: Cleaning up "offline-docker-207715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-207715
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-207715: (4.265677592s)
--- PASS: TestOffline (94.18s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-549698
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-549698: exit status 85 (65.568625ms)

-- stdout --
	* Profile "addons-549698" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-549698"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-549698
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-549698: exit status 85 (66.627105ms)

-- stdout --
	* Profile "addons-549698" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-549698"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (122.17s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-549698 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-549698 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m2.174342326s)
--- PASS: TestAddons/Setup (122.17s)

TestAddons/serial/Volcano (41.09s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 13.608693ms
addons_test.go:876: volcano-admission stabilized in 13.642721ms
addons_test.go:884: volcano-controller stabilized in 14.513851ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-g5rzs" [73b246f7-1ecb-441c-8a50-fbc399e3daa6] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00398339s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-2vjnx" [ab24b7f9-6e07-433c-9fd8-5232a8720019] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003802107s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-v4w8k" [61f91c2b-cc06-4e55-a467-783f881d2d6e] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004271916s
addons_test.go:903: (dbg) Run:  kubectl --context addons-549698 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-549698 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-549698 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [64ba73e8-f5dd-4b8a-81d2-da7b16b0e8fa] Pending
helpers_test.go:352: "test-job-nginx-0" [64ba73e8-f5dd-4b8a-81d2-da7b16b0e8fa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [64ba73e8-f5dd-4b8a-81d2-da7b16b0e8fa] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003559463s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-549698 addons disable volcano --alsologtostderr -v=1: (11.748635262s)
--- PASS: TestAddons/serial/Volcano (41.09s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-549698 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-549698 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.49s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-549698 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-549698 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8db5c714-461a-4839-8d28-10bd0a1d1d45] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8db5c714-461a-4839-8d28-10bd0a1d1d45] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004464426s
addons_test.go:694: (dbg) Run:  kubectl --context addons-549698 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-549698 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-549698 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

TestAddons/parallel/Registry (15.78s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.281269ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-w6xvb" [c4bb24bc-f880-454b-8f92-28e00d73c081] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003902068s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-ztxr8" [e82e3502-cd68-4974-9055-7d8a46e5659c] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003263444s
addons_test.go:392: (dbg) Run:  kubectl --context addons-549698 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-549698 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-549698 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.023424617s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 ip
2025/12/07 22:33:19 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.78s)

TestAddons/parallel/RegistryCreds (0.64s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.399707ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-549698
addons_test.go:332: (dbg) Run:  kubectl --context addons-549698 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.64s)

TestAddons/parallel/Ingress (21.39s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-549698 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-549698 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-549698 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [ce5e3508-4fc7-453e-a2b1-7a9d11dca2f1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [ce5e3508-4fc7-453e-a2b1-7a9d11dca2f1] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003787315s
I1207 22:33:31.329625  397166 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-549698 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-549698 addons disable ingress-dns --alsologtostderr -v=1: (1.201003627s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-549698 addons disable ingress --alsologtostderr -v=1: (7.77182255s)
--- PASS: TestAddons/parallel/Ingress (21.39s)

TestAddons/parallel/InspektorGadget (10.86s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-jxwbp" [34c97bc6-0b17-468c-a607-38f388392bc3] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004822316s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-549698 addons disable inspektor-gadget --alsologtostderr -v=1: (5.849322308s)
--- PASS: TestAddons/parallel/InspektorGadget (10.86s)

TestAddons/parallel/MetricsServer (5.66s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.471114ms
I1207 22:33:04.016818  397166 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1207 22:33:04.016845  397166 kapi.go:107] duration metric: took 4.029721ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-fskcj" [b224e94d-11c2-492f-9c6e-aae4cf77ed22] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004021571s
addons_test.go:463: (dbg) Run:  kubectl --context addons-549698 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.66s)

TestAddons/parallel/CSI (53.05s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1207 22:33:04.012840  397166 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.043105ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-549698 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-549698 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [7fea1eb2-f9a9-4041-b5ca-d43b55903b83] Pending
helpers_test.go:352: "task-pv-pod" [7fea1eb2-f9a9-4041-b5ca-d43b55903b83] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [7fea1eb2-f9a9-4041-b5ca-d43b55903b83] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003238842s
addons_test.go:572: (dbg) Run:  kubectl --context addons-549698 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-549698 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-549698 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-549698 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-549698 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-549698 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-549698 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [3fdaea08-895a-4cb3-b4d6-52b38dab4add] Pending
helpers_test.go:352: "task-pv-pod-restore" [3fdaea08-895a-4cb3-b4d6-52b38dab4add] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [3fdaea08-895a-4cb3-b4d6-52b38dab4add] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003223073s
addons_test.go:614: (dbg) Run:  kubectl --context addons-549698 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-549698 delete pod task-pv-pod-restore: (1.006494864s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-549698 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-549698 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-549698 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.493420515s)
--- PASS: TestAddons/parallel/CSI (53.05s)

TestAddons/parallel/Headlamp (16.58s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-549698 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-dftsf" [c09438c5-337f-4f53-8049-09e7200a2a61] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-dftsf" [c09438c5-337f-4f53-8049-09e7200a2a61] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003408187s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-549698 addons disable headlamp --alsologtostderr -v=1: (5.843206852s)
--- PASS: TestAddons/parallel/Headlamp (16.58s)

TestAddons/parallel/CloudSpanner (5.46s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-lg98p" [b151a8f5-b237-469a-ab26-6fa895fc628b] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003109865s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.46s)

TestAddons/parallel/LocalPath (54.59s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-549698 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-549698 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-549698 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [469995f4-133f-434a-a512-4742976d7460] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [469995f4-133f-434a-a512-4742976d7460] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [469995f4-133f-434a-a512-4742976d7460] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.00295375s
addons_test.go:967: (dbg) Run:  kubectl --context addons-549698 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 ssh "cat /opt/local-path-provisioner/pvc-e55843fd-3b8f-4332-88c9-a1f72b43b0e3_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-549698 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-549698 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-549698 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.697372586s)
--- PASS: TestAddons/parallel/LocalPath (54.59s)

TestAddons/parallel/NvidiaDevicePlugin (5.43s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-285qt" [5d39f418-232a-4db2-aef1-adaf89154225] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004320585s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.43s)

TestAddons/parallel/Yakd (10.73s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-btmz9" [0603aa55-caed-4b83-9751-7471ea4ef090] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004144491s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-549698 addons disable yakd --alsologtostderr -v=1: (5.72596621s)
--- PASS: TestAddons/parallel/Yakd (10.73s)

TestAddons/parallel/AmdGpuDevicePlugin (5.44s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-l7b5n" [1c737294-5c1d-4548-b308-95566c2a3c92] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003377972s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-549698 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.44s)

TestAddons/StoppedEnableDisable (11.22s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-549698
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-549698: (10.927451239s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-549698
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-549698
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-549698
--- PASS: TestAddons/StoppedEnableDisable (11.22s)

TestCertOptions (28.87s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-849777 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E1207 23:32:33.694791  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:32:49.706207  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-849777 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (25.951443226s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-849777 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-849777 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-849777 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-849777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-849777
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-849777: (2.24859665s)
--- PASS: TestCertOptions (28.87s)

TestCertExpiration (240.08s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-358219 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-358219 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (27.363903453s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-358219 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E1207 23:32:04.468964  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-358219 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (30.370016419s)
helpers_test.go:175: Cleaning up "cert-expiration-358219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-358219
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-358219: (2.340297049s)
--- PASS: TestCertExpiration (240.08s)
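The TestCertExpiration block above doubles as a short how-to for certificate renewal. Condensed by hand, the flow is roughly the following (a sketch only; the profile name and flags are taken verbatim from the log, and the gap between the two starts is the 3-minute expiration window the test waits out):

	# start a cluster whose certificates expire after only 3 minutes
	out/minikube-linux-amd64 start -p cert-expiration-358219 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
	# once that window has elapsed, a second start regenerates the certificates with a one-year expiration
	out/minikube-linux-amd64 start -p cert-expiration-358219 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
	# clean up the profile
	out/minikube-linux-amd64 delete -p cert-expiration-358219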
TestDockerFlags (29.44s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-584763 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-584763 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (26.513218094s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-584763 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-584763 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-584763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-584763
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-584763: (2.289311624s)
--- PASS: TestDockerFlags (29.44s)
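The TestDockerFlags block above is effectively a recipe for passing extra daemon settings through to dockerd inside the node. A minimal by-hand version (a sketch that reuses the profile name and a subset of the flags shown in the log) would be:

	# start a cluster, forwarding environment variables and daemon options to dockerd
	out/minikube-linux-amd64 start -p docker-flags-584763 --memory=3072 --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --driver=docker  --container-runtime=docker
	# confirm the settings reached the systemd unit that runs dockerd
	out/minikube-linux-amd64 -p docker-flags-584763 ssh "sudo systemctl show docker --property=Environment --no-pager"
	out/minikube-linux-amd64 -p docker-flags-584763 ssh "sudo systemctl show docker --property=ExecStart --no-pager"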
TestForceSystemdFlag (29.67s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-906098 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-906098 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (27.034614689s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-906098 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-906098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-906098
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-906098: (2.268635584s)
--- PASS: TestForceSystemdFlag (29.67s)

TestForceSystemdEnv (30.62s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-516926 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-516926 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.004561849s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-516926 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-516926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-516926
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-516926: (2.233099429s)
--- PASS: TestForceSystemdEnv (30.62s)

TestErrorSpam/setup (21.64s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-950586 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-950586 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-950586 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-950586 --driver=docker  --container-runtime=docker: (21.635081697s)
--- PASS: TestErrorSpam/setup (21.64s)

TestErrorSpam/start (0.68s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

TestErrorSpam/status (0.95s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 status
--- PASS: TestErrorSpam/status (0.95s)

TestErrorSpam/pause (1.24s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 pause
--- PASS: TestErrorSpam/pause (1.24s)

TestErrorSpam/unpause (1.32s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 unpause
--- PASS: TestErrorSpam/unpause (1.32s)

TestErrorSpam/stop (11.05s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 stop: (10.826435567s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-950586 --log_dir /tmp/nospam-950586 stop
--- PASS: TestErrorSpam/stop (11.05s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22054-393577/.minikube/files/etc/test/nested/copy/397166/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (64.12s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-304107 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-304107 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m4.119643374s)
--- PASS: TestFunctional/serial/StartWithProxy (64.12s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.69s)
=== RUN   TestFunctional/serial/SoftStart
I1207 22:36:17.776246  397166 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-304107 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-304107 --alsologtostderr -v=8: (38.686748729s)
functional_test.go:678: soft start took 38.687663702s for "functional-304107" cluster.
I1207 22:36:56.464388  397166 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (38.69s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-304107 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.46s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.46s)

TestFunctional/serial/CacheCmd/cache/add_local (1.42s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-304107 /tmp/TestFunctionalserialCacheCmdcacheadd_local4083105935/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 cache add minikube-local-cache-test:functional-304107
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-304107 cache add minikube-local-cache-test:functional-304107: (1.074445315s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 cache delete minikube-local-cache-test:functional-304107
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-304107
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.42s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304107 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.879081ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.36s)
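The non-zero `crictl inspecti` exit above is the point of the test: the image is deliberately removed inside the node, confirmed missing, and then restored with `cache reload`. A hand-run sketch of the same sequence:

    # Remove the cached image inside the node, confirm crictl no longer finds it,
    # then repopulate the node from the host-side cache and re-check.
    out/minikube-linux-amd64 -p functional-304107 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-304107 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image absent, as expected"
    out/minikube-linux-amd64 -p functional-304107 cache reload
    out/minikube-linux-amd64 -p functional-304107 ssh sudo crictl inspecti registry.k8s.io/pause:latest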

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 kubectl -- --context functional-304107 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-304107 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.43s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-304107 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1207 22:37:04.469440  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:37:04.475890  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:37:04.487303  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:37:04.508724  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:37:04.550147  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:37:04.631580  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:37:04.793137  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:37:05.114828  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:37:05.756873  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:37:07.038807  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:37:09.601723  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:37:14.723427  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:37:24.965039  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-304107 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.428337803s)
functional_test.go:776: restart took 40.428472014s for "functional-304107" cluster.
I1207 22:37:43.034540  397166 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (40.43s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-304107 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.02s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-304107 logs: (1.018131975s)
--- PASS: TestFunctional/serial/LogsCmd (1.02s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.05s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 logs --file /tmp/TestFunctionalserialLogsFileCmd96082571/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-304107 logs --file /tmp/TestFunctionalserialLogsFileCmd96082571/001/logs.txt: (1.052358374s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.05s)

                                                
                                    
TestFunctional/serial/InvalidService (3.99s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-304107 apply -f testdata/invalidsvc.yaml
E1207 22:37:45.446735  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-304107
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-304107: exit status 115 (341.550362ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30130 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-304107 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.99s)
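The exit status 115 is expected here: the manifest creates a Service whose selector matches no running pod, so `minikube service` prints the NodePort table but reports SVC_UNREACHABLE. A minimal manual reproduction, assuming the same testdata manifest from the minikube repo:

    # Apply a Service with no backing pods, watch "minikube service" fail, then clean up.
    kubectl --context functional-304107 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-304107 || echo "exit: $? (115 = SVC_UNREACHABLE)"
    kubectl --context functional-304107 delete -f testdata/invalidsvc.yaml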

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304107 config get cpus: exit status 14 (89.404181ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304107 config get cpus: exit status 14 (69.513485ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
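Both exit-status-14 results are the expected behaviour: `config get` returns 14 when the key is not set. A quick manual check of the same round trip:

    # "config get" on an unset key exits 14 ("specified key could not be found in config");
    # after "config set cpus 2" the same command prints the value and exits 0.
    out/minikube-linux-amd64 -p functional-304107 config unset cpus
    out/minikube-linux-amd64 -p functional-304107 config get cpus; echo "exit: $?"    # expect 14
    out/minikube-linux-amd64 -p functional-304107 config set cpus 2
    out/minikube-linux-amd64 -p functional-304107 config get cpus; echo "exit: $?"    # prints 2, expect 0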

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-304107 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-304107 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (189.554299ms)

                                                
                                                
-- stdout --
	* [functional-304107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:38:12.413271  447990 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:38:12.413385  447990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:38:12.413397  447990 out.go:374] Setting ErrFile to fd 2...
	I1207 22:38:12.413403  447990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:38:12.413686  447990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 22:38:12.414260  447990 out.go:368] Setting JSON to false
	I1207 22:38:12.415549  447990 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4835,"bootTime":1765142257,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:38:12.415627  447990 start.go:143] virtualization: kvm guest
	I1207 22:38:12.417896  447990 out.go:179] * [functional-304107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:38:12.419432  447990 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:38:12.419448  447990 notify.go:221] Checking for updates...
	I1207 22:38:12.422223  447990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:38:12.423522  447990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	I1207 22:38:12.424855  447990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	I1207 22:38:12.426102  447990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:38:12.427389  447990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:38:12.429275  447990 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1207 22:38:12.430098  447990 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:38:12.456875  447990 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:38:12.457066  447990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:38:12.523005  447990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-07 22:38:12.511243084 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:38:12.523155  447990 docker.go:319] overlay module found
	I1207 22:38:12.525278  447990 out.go:179] * Using the docker driver based on existing profile
	I1207 22:38:12.526277  447990 start.go:309] selected driver: docker
	I1207 22:38:12.526292  447990 start.go:927] validating driver "docker" against &{Name:functional-304107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-304107 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:38:12.526376  447990 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:38:12.528036  447990 out.go:203] 
	W1207 22:38:12.529202  447990 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1207 22:38:12.530379  447990 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-304107 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.45s)
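Exit status 23 is the pass condition for the first half of this test: `--dry-run` validates flags against the existing profile without starting anything, and a 250MB memory request is rejected as below the 1800MB minimum. The second invocation omits the bad flag and validates cleanly. A hand-run equivalent:

    # Dry-run validation against the existing profile: the undersized memory request
    # fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23); the plain dry-run succeeds.
    out/minikube-linux-amd64 start -p functional-304107 --dry-run --memory 250MB --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 start -p functional-304107 --dry-run --driver=docker --container-runtime=docker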

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-304107 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-304107 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (168.602785ms)

                                                
                                                
-- stdout --
	* [functional-304107] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:38:02.995649  445666 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:38:02.995775  445666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:38:02.995785  445666 out.go:374] Setting ErrFile to fd 2...
	I1207 22:38:02.995792  445666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:38:02.996095  445666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 22:38:02.996544  445666 out.go:368] Setting JSON to false
	I1207 22:38:02.997726  445666 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4826,"bootTime":1765142257,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:38:02.997797  445666 start.go:143] virtualization: kvm guest
	I1207 22:38:02.999726  445666 out.go:179] * [functional-304107] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1207 22:38:03.001237  445666 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:38:03.001225  445666 notify.go:221] Checking for updates...
	I1207 22:38:03.002524  445666 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:38:03.003869  445666 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	I1207 22:38:03.004977  445666 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	I1207 22:38:03.005998  445666 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:38:03.007014  445666 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:38:03.008470  445666 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1207 22:38:03.009058  445666 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:38:03.032708  445666 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:38:03.032863  445666 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:38:03.091343  445666 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-07 22:38:03.079966822 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:38:03.091456  445666 docker.go:319] overlay module found
	I1207 22:38:03.093930  445666 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1207 22:38:03.095032  445666 start.go:309] selected driver: docker
	I1207 22:38:03.095047  445666 start.go:927] validating driver "docker" against &{Name:functional-304107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-304107 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:38:03.095125  445666 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:38:03.096893  445666 out.go:203] 
	W1207 22:38:03.098123  445666 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1207 22:38:03.099245  445666 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)
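The three invocations cover the default text output, a Go-template format string, and JSON. For reference (the template labels are free-form text; the `{{.Host}}`, `{{.Kubelet}}`, `{{.APIServer}}` and `{{.Kubeconfig}}` fields are the ones exercised above):

    # Default, templated, and JSON-formatted status for the profile.
    out/minikube-linux-amd64 -p functional-304107 status
    out/minikube-linux-amd64 -p functional-304107 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-304107 status -o json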

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-304107 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-304107 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-bw6s8" [6b531caa-32e3-4efb-935f-ecd4e3b1f256] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-bw6s8" [6b531caa-32e3-4efb-935f-ecd4e3b1f256] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003943266s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30458
functional_test.go:1680: http://192.168.49.2:30458: success! body:
Request served by hello-node-connect-7d85dfc575-bw6s8

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30458
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.52s)
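The flow being verified is: deploy the echo server, expose it as a NodePort service, ask minikube for the URL, and fetch it. A manual sketch (the test fetches the URL from Go; curl is used here only for illustration):

    # Deploy an echo server, expose it on a NodePort, and request the URL minikube reports.
    kubectl --context functional-304107 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-304107 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-304107 service hello-node-connect --url)
    curl -s "$URL"    # echoes the request back ("Request served by hello-node-connect-...")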

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (36.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [6ce53ec7-1f2c-4ec0-8647-16fe85bdaeaf] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003578309s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-304107 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-304107 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-304107 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-304107 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f56639cc-e175-445b-8e4d-3b00c330d081] Pending
helpers_test.go:352: "sp-pod" [f56639cc-e175-445b-8e4d-3b00c330d081] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [f56639cc-e175-445b-8e4d-3b00c330d081] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003367741s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-304107 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-304107 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-304107 delete -f testdata/storage-provisioner/pod.yaml: (1.095920795s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-304107 apply -f testdata/storage-provisioner/pod.yaml
I1207 22:38:08.618381  397166 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [06c06f95-d37e-40e8-8cdd-2e54275cf2d3] Pending
helpers_test.go:352: "sp-pod" [06c06f95-d37e-40e8-8cdd-2e54275cf2d3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [06c06f95-d37e-40e8-8cdd-2e54275cf2d3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.003735226s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-304107 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.91s)
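The persistence check is: provision a PVC, mount it in a pod, write a marker file, delete and recreate the pod, and confirm the file survived. Condensed to the CLI steps (waits for the pod to reach Running are omitted; the manifests are the repo's testdata files):

    # Write to the claim-backed volume, recreate the consuming pod, and verify the data survived.
    kubectl --context functional-304107 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-304107 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-304107 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-304107 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-304107 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-304107 exec sp-pod -- ls /tmp/mount    # "foo" should still be present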

                                                
                                    
TestFunctional/parallel/SSHCmd (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh -n functional-304107 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 cp functional-304107:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1873481159/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh -n functional-304107 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh -n functional-304107 "sudo cat /tmp/does/not/exist/cp-test.txt"
E1207 22:38:26.408838  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/CpCmd (1.65s)
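The copy test round-trips a file three ways, reading each copy back over ssh. A hand-run version (the host-side destination below is simplified from the test's temporary directory):

    # Host -> node, node -> host, and host -> new node path, each verified with ssh + cat.
    out/minikube-linux-amd64 -p functional-304107 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-304107 ssh -n functional-304107 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-304107 cp functional-304107:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-304107 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    out/minikube-linux-amd64 -p functional-304107 ssh -n functional-304107 "sudo cat /tmp/does/not/exist/cp-test.txt"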

                                                
                                    
TestFunctional/parallel/MySQL (23.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-304107 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-4jlkm" [7ee1bb32-f00c-46d3-a6c1-96866e0917de] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-4jlkm" [7ee1bb32-f00c-46d3-a6c1-96866e0917de] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.00393865s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-304107 exec mysql-5bb876957f-4jlkm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-304107 exec mysql-5bb876957f-4jlkm -- mysql -ppassword -e "show databases;": exit status 1 (171.901923ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1207 22:38:21.927083  397166 retry.go:31] will retry after 1.4746105s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-304107 exec mysql-5bb876957f-4jlkm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-304107 exec mysql-5bb876957f-4jlkm -- mysql -ppassword -e "show databases;": exit status 1 (128.930634ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1207 22:38:23.531175  397166 retry.go:31] will retry after 1.746162342s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-304107 exec mysql-5bb876957f-4jlkm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-304107 exec mysql-5bb876957f-4jlkm -- mysql -ppassword -e "show databases;": exit status 1 (129.995619ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1207 22:38:25.408322  397166 retry.go:31] will retry after 2.822006805s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-304107 exec mysql-5bb876957f-4jlkm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.78s)
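The retried non-zero exits are expected: the pod reports Running before mysqld finishes initialising, so the test polls `show databases;` until it answers. A manual equivalent of that polling (the pod name is specific to this run, so it is looked up via the app=mysql label here; the sleep interval is arbitrary):

    # Poll the MySQL pod until the server accepts the password and answers queries.
    POD=$(kubectl --context functional-304107 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
    until kubectl --context functional-304107 exec "$POD" -- mysql -ppassword -e "show databases;"; do
        sleep 2
    done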

                                                
                                    
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/397166/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "sudo cat /etc/test/nested/copy/397166/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
TestFunctional/parallel/CertSync (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/397166.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "sudo cat /etc/ssl/certs/397166.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/397166.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "sudo cat /usr/share/ca-certificates/397166.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3971662.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "sudo cat /etc/ssl/certs/3971662.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3971662.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "sudo cat /usr/share/ca-certificates/3971662.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.79s)
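The test checks that a synced certificate shows up in both trust locations inside the node, plus a hash-named entry under /etc/ssl/certs. The 397166/3971662 file names come from this run's test process ID, so a manual spot-check would substitute whatever certificate name was synced:

    # Confirm a synced certificate is visible in both locations inside the node.
    out/minikube-linux-amd64 -p functional-304107 ssh "sudo cat /etc/ssl/certs/397166.pem"
    out/minikube-linux-amd64 -p functional-304107 ssh "sudo cat /usr/share/ca-certificates/397166.pem"
    out/minikube-linux-amd64 -p functional-304107 ssh "sudo cat /etc/ssl/certs/51391683.0"    # hash-named entry for the same cert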

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-304107 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304107 ssh "sudo systemctl is-active crio": exit status 1 (302.643689ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.30s)
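The non-zero exit is the pass condition: with docker as the active container runtime, `systemctl is-active crio` inside the node prints `inactive` and exits non-zero, which the ssh wrapper surfaces as exit status 1. To verify by hand:

    # With the docker runtime selected, crio must not be active inside the node.
    out/minikube-linux-amd64 -p functional-304107 ssh "sudo systemctl is-active crio" || echo "crio inactive, as expected"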

                                                
                                    
TestFunctional/parallel/License (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-304107 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-304107
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-304107
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-304107 image ls --format short --alsologtostderr:
I1207 22:38:27.217004  451915 out.go:360] Setting OutFile to fd 1 ...
I1207 22:38:27.217291  451915 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:27.217304  451915 out.go:374] Setting ErrFile to fd 2...
I1207 22:38:27.217311  451915 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:27.217737  451915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
I1207 22:38:27.218351  451915 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:27.218451  451915 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:27.218933  451915 cli_runner.go:164] Run: docker container inspect functional-304107 --format={{.State.Status}}
I1207 22:38:27.238701  451915 ssh_runner.go:195] Run: systemctl --version
I1207 22:38:27.238757  451915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-304107
I1207 22:38:27.257576  451915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-304107/id_rsa Username:docker}
I1207 22:38:27.351810  451915 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-304107 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-304107 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-304107 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 442211: os: process already finished
helpers_test.go:525: unable to kill pid 441966: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-304107 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-304107 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-304107 │ d4667176eb768 │ 30B    │
│ docker.io/library/nginx                     │ alpine            │ d4918ca78576a │ 52.8MB │
│ docker.io/library/mysql                     │ 5.7               │ 5107333e08a87 │ 501MB  │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ docker.io/library/nginx                     │ latest            │ 60adc2e137e75 │ 152MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.2           │ a5f569d49a979 │ 88MB   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2           │ 01e8bacf0f500 │ 74.9MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/kube-proxy                  │ v1.34.2           │ 8aa150647e88a │ 71.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2           │ 88320b5498ff2 │ 52.8MB │
│ docker.io/kicbase/echo-server               │ functional-304107 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-304107 image ls --format table --alsologtostderr:
I1207 22:38:27.820490  452271 out.go:360] Setting OutFile to fd 1 ...
I1207 22:38:27.820781  452271 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:27.820792  452271 out.go:374] Setting ErrFile to fd 2...
I1207 22:38:27.820796  452271 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:27.821038  452271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
I1207 22:38:27.821692  452271 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:27.821806  452271 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:27.822282  452271 cli_runner.go:164] Run: docker container inspect functional-304107 --format={{.State.Status}}
I1207 22:38:27.840479  452271 ssh_runner.go:195] Run: systemctl --version
I1207 22:38:27.840527  452271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-304107
I1207 22:38:27.859369  452271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-304107/id_rsa Username:docker}
I1207 22:38:27.955003  452271 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-304107 image ls --format json --alsologtostderr:
[{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"88000000"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"71900000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"152000000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a87
0d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"74900000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-304107","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","re
poDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"52800000"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"d4667176eb7683a7d090cf7d477302f978783636b03e35875012ae8f3717dfbf","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-304107"],"size":"30"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-304107 image ls --format json --alsologtostderr:
I1207 22:38:27.596797  452162 out.go:360] Setting OutFile to fd 1 ...
I1207 22:38:27.597048  452162 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:27.597057  452162 out.go:374] Setting ErrFile to fd 2...
I1207 22:38:27.597061  452162 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:27.597259  452162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
I1207 22:38:27.597868  452162 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:27.597967  452162 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:27.598385  452162 cli_runner.go:164] Run: docker container inspect functional-304107 --format={{.State.Status}}
I1207 22:38:27.616656  452162 ssh_runner.go:195] Run: systemctl --version
I1207 22:38:27.616713  452162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-304107
I1207 22:38:27.634442  452162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-304107/id_rsa Username:docker}
I1207 22:38:27.727558  452162 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-304107 image ls --format yaml --alsologtostderr:
- id: d4667176eb7683a7d090cf7d477302f978783636b03e35875012ae8f3717dfbf
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-304107
size: "30"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "88000000"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "52800000"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "71900000"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52800000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "152000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-304107
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "74900000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-304107 image ls --format yaml --alsologtostderr:
I1207 22:38:27.368073  452008 out.go:360] Setting OutFile to fd 1 ...
I1207 22:38:27.368419  452008 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:27.368433  452008 out.go:374] Setting ErrFile to fd 2...
I1207 22:38:27.368439  452008 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:27.368676  452008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
I1207 22:38:27.369267  452008 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:27.369363  452008 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:27.369958  452008 cli_runner.go:164] Run: docker container inspect functional-304107 --format={{.State.Status}}
I1207 22:38:27.388861  452008 ssh_runner.go:195] Run: systemctl --version
I1207 22:38:27.388911  452008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-304107
I1207 22:38:27.407691  452008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-304107/id_rsa Username:docker}
I1207 22:38:27.503038  452008 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304107 ssh pgrep buildkitd: exit status 1 (279.307489ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image build -t localhost/my-image:functional-304107 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-304107 image build -t localhost/my-image:functional-304107 testdata/build --alsologtostderr: (2.90446176s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-304107 image build -t localhost/my-image:functional-304107 testdata/build --alsologtostderr:
I1207 22:38:27.719213  452217 out.go:360] Setting OutFile to fd 1 ...
I1207 22:38:27.719479  452217 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:27.719489  452217 out.go:374] Setting ErrFile to fd 2...
I1207 22:38:27.719493  452217 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:38:27.719710  452217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
I1207 22:38:27.720297  452217 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:27.721180  452217 config.go:182] Loaded profile config "functional-304107": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:38:27.721703  452217 cli_runner.go:164] Run: docker container inspect functional-304107 --format={{.State.Status}}
I1207 22:38:27.741654  452217 ssh_runner.go:195] Run: systemctl --version
I1207 22:38:27.741702  452217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-304107
I1207 22:38:27.762277  452217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-304107/id_rsa Username:docker}
I1207 22:38:27.857627  452217 build_images.go:162] Building image from path: /tmp/build.3838617173.tar
I1207 22:38:27.857705  452217 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1207 22:38:27.866148  452217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3838617173.tar
I1207 22:38:27.869805  452217 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3838617173.tar: stat -c "%s %y" /var/lib/minikube/build/build.3838617173.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3838617173.tar': No such file or directory
I1207 22:38:27.869839  452217 ssh_runner.go:362] scp /tmp/build.3838617173.tar --> /var/lib/minikube/build/build.3838617173.tar (3072 bytes)
I1207 22:38:27.888070  452217 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3838617173
I1207 22:38:27.895828  452217 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3838617173 -xf /var/lib/minikube/build/build.3838617173.tar
I1207 22:38:27.904933  452217 docker.go:361] Building image: /var/lib/minikube/build/build.3838617173
I1207 22:38:27.905007  452217 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-304107 /var/lib/minikube/build/build.3838617173
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:5c8500b67e13fe363bd10c3ee104130e86cff001d20a1cc1587f56bc576ce400 done
#8 naming to localhost/my-image:functional-304107 done
#8 DONE 0.0s
I1207 22:38:30.543082  452217 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-304107 /var/lib/minikube/build/build.3838617173: (2.638042115s)
I1207 22:38:30.543174  452217 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3838617173
I1207 22:38:30.551633  452217 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3838617173.tar
I1207 22:38:30.559201  452217 build_images.go:218] Built localhost/my-image:functional-304107 from /tmp/build.3838617173.tar
I1207 22:38:30.559247  452217 build_images.go:134] succeeded building to: functional-304107
I1207 22:38:30.559254  452217 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image ls
E1207 22:39:48.333776  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:42:04.472486  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:42:32.176979  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.75023833s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-304107
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.50s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-304107 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-304107 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [267fbcef-7099-4034-8356-29b574425a99] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [267fbcef-7099-4034-8356-29b574425a99] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004417212s
I1207 22:37:59.930030  397166 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "352.62517ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.394217ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "331.465785ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "76.872775ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image load --daemon kicbase/echo-server:functional-304107 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image load --daemon kicbase/echo-server:functional-304107 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-304107
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image load --daemon kicbase/echo-server:functional-304107 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image save kicbase/echo-server:functional-304107 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image rm kicbase/echo-server:functional-304107 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.60s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-304107
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 image save --daemon kicbase/echo-server:functional-304107 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-304107
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-304107 create deployment hello-node --image kicbase/echo-server
I1207 22:37:56.237688  397166 detect.go:223] nested VM detected
functional_test.go:1455: (dbg) Run:  kubectl --context functional-304107 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-lsdfr" [0ffcfac1-98f7-4471-8c60-dfba2f924772] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-lsdfr" [0ffcfac1-98f7-4471-8c60-dfba2f924772] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.00454484s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.17s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-304107 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.150.78 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-304107 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-304107 docker-env) && out/minikube-linux-amd64 status -p functional-304107"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-304107 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.97s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (18.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-304107 /tmp/TestFunctionalparallelMountCmdany-port242809858/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765147083107083946" to /tmp/TestFunctionalparallelMountCmdany-port242809858/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765147083107083946" to /tmp/TestFunctionalparallelMountCmdany-port242809858/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765147083107083946" to /tmp/TestFunctionalparallelMountCmdany-port242809858/001/test-1765147083107083946
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304107 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.046538ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 22:38:03.406477  397166 retry.go:31] will retry after 739.649445ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  7 22:38 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  7 22:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  7 22:38 test-1765147083107083946
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh cat /mount-9p/test-1765147083107083946
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-304107 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [27e97b73-09c8-4ca5-bfbb-f42285579fac] Pending
helpers_test.go:352: "busybox-mount" [27e97b73-09c8-4ca5-bfbb-f42285579fac] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [27e97b73-09c8-4ca5-bfbb-f42285579fac] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [27e97b73-09c8-4ca5-bfbb-f42285579fac] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 15.004402962s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-304107 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-304107 /tmp/TestFunctionalparallelMountCmdany-port242809858/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.97s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 service list -o json
functional_test.go:1504: Took "950.186934ms" to run "out/minikube-linux-amd64 -p functional-304107 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.95s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31054
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.60s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31054
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.60s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-304107 /tmp/TestFunctionalparallelMountCmdspecific-port680109933/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304107 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (275.531849ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 22:38:21.413817  397166 retry.go:31] will retry after 642.961468ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-304107 /tmp/TestFunctionalparallelMountCmdspecific-port680109933/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304107 ssh "sudo umount -f /mount-9p": exit status 1 (281.823535ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-304107 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-304107 /tmp/TestFunctionalparallelMountCmdspecific-port680109933/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-304107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4148170977/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-304107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4148170977/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-304107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4148170977/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304107 ssh "findmnt -T" /mount1: exit status 1 (343.608443ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 22:38:23.466011  397166 retry.go:31] will retry after 580.634607ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-304107 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-304107 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-304107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4148170977/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-304107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4148170977/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-304107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4148170977/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-304107
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-304107
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-304107
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22054-393577/.minikube/files/etc/test/nested/copy/397166/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (58.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-442811 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-442811 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (58.248154798s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (58.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (40.69s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1207 22:44:16.220362  397166 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-442811 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-442811 --alsologtostderr -v=8: (40.689568636s)
functional_test.go:678: soft start took 40.689982688s for "functional-442811" cluster.
I1207 22:44:56.910344  397166 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (40.69s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-442811 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2292499392/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 cache add minikube-local-cache-test:functional-442811
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-442811 cache add minikube-local-cache-test:functional-442811: (1.070197678s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 cache delete minikube-local-cache-test:functional-442811
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-442811
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-442811 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (288.792101ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.39s)
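
The cache_reload steps above make a round trip: remove the image from the node's runtime, confirm crictl no longer finds it, reload minikube's image cache, and confirm it is present again. A rough manual equivalent, assuming a minikube binary on PATH and the profile from this run:

    minikube -p functional-442811 ssh "sudo docker rmi registry.k8s.io/pause:latest"
    minikube -p functional-442811 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # expected to fail while the image is absent
    minikube -p functional-442811 cache reload
    minikube -p functional-442811 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # succeeds again after the reload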

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 kubectl -- --context functional-442811 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-442811 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (41.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-442811 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-442811 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.395873582s)
functional_test.go:776: restart took 41.396047032s for "functional-442811" cluster.
I1207 22:45:44.360147  397166 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (41.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-442811 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-442811 logs: (1.034261462s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1146691584/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-442811 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1146691584/001/logs.txt: (1.033394681s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-442811 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-442811
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-442811: exit status 115 (343.184992ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32144 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-442811 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-442811 config get cpus: exit status 14 (71.638149ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-442811 config get cpus: exit status 14 (77.751922ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)
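
The ConfigCmd assertions hinge on "config get" exiting with status 14 when the key is not set, so the two non-zero exits above are the expected outcomes. A rough manual equivalent, assuming a minikube binary on PATH:

    minikube -p functional-442811 config unset cpus
    minikube -p functional-442811 config get cpus      # exit status 14: key not found
    minikube -p functional-442811 config set cpus 2
    minikube -p functional-442811 config get cpus      # prints the stored value
    minikube -p functional-442811 config unset cpus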

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-442811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-442811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (168.456036ms)

                                                
                                                
-- stdout --
	* [functional-442811] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:52:15.184373  481517 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:52:15.184479  481517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:52:15.184488  481517 out.go:374] Setting ErrFile to fd 2...
	I1207 22:52:15.184492  481517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:52:15.184673  481517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 22:52:15.185090  481517 out.go:368] Setting JSON to false
	I1207 22:52:15.186040  481517 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5678,"bootTime":1765142257,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:52:15.186094  481517 start.go:143] virtualization: kvm guest
	I1207 22:52:15.187768  481517 out.go:179] * [functional-442811] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:52:15.189018  481517 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:52:15.189037  481517 notify.go:221] Checking for updates...
	I1207 22:52:15.191123  481517 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:52:15.192323  481517 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	I1207 22:52:15.193414  481517 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	I1207 22:52:15.194404  481517 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:52:15.195609  481517 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:52:15.197060  481517 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1207 22:52:15.197673  481517 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:52:15.226903  481517 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:52:15.227021  481517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:52:15.282806  481517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 22:52:15.27238236 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:52:15.282916  481517 docker.go:319] overlay module found
	I1207 22:52:15.284561  481517 out.go:179] * Using the docker driver based on existing profile
	I1207 22:52:15.285627  481517 start.go:309] selected driver: docker
	I1207 22:52:15.285641  481517 start.go:927] validating driver "docker" against &{Name:functional-442811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-442811 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:52:15.285745  481517 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:52:15.287399  481517 out.go:203] 
	W1207 22:52:15.288567  481517 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1207 22:52:15.289616  481517 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-442811 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.39s)
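
DryRun passes because the failure is the expected result: with --dry-run and a deliberately small --memory, minikube rejects the request with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before touching the existing cluster. A rough manual equivalent, assuming a minikube binary on PATH:

    minikube start -p functional-442811 --dry-run --memory 250MB --driver=docker --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
    echo $?    # 23: requested memory is below the 1800MB usable minimum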

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-442811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-442811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (170.25603ms)

                                                
                                                
-- stdout --
	* [functional-442811] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:52:15.018056  481433 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:52:15.018153  481433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:52:15.018158  481433 out.go:374] Setting ErrFile to fd 2...
	I1207 22:52:15.018162  481433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:52:15.018545  481433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 22:52:15.019008  481433 out.go:368] Setting JSON to false
	I1207 22:52:15.019995  481433 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5678,"bootTime":1765142257,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:52:15.020053  481433 start.go:143] virtualization: kvm guest
	I1207 22:52:15.022159  481433 out.go:179] * [functional-442811] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1207 22:52:15.023336  481433 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:52:15.023408  481433 notify.go:221] Checking for updates...
	I1207 22:52:15.025406  481433 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:52:15.026587  481433 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	I1207 22:52:15.027682  481433 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	I1207 22:52:15.028697  481433 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:52:15.029857  481433 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:52:15.031302  481433 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1207 22:52:15.031873  481433 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:52:15.057471  481433 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:52:15.057573  481433 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:52:15.112288  481433 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-07 22:52:15.102696961 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:52:15.112427  481433 docker.go:319] overlay module found
	I1207 22:52:15.114311  481433 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1207 22:52:15.115524  481433 start.go:309] selected driver: docker
	I1207 22:52:15.115543  481433 start.go:927] validating driver "docker" against &{Name:functional-442811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-442811 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:52:15.115763  481433 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:52:15.117679  481433 out.go:203] 
	W1207 22:52:15.118954  481433 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1207 22:52:15.120195  481433 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.96s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.61s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh -n functional-442811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 cp functional-442811:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp757346297/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh -n functional-442811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh -n functional-442811 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.87s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/397166/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "sudo cat /etc/test/nested/copy/397166/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.89s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/397166.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "sudo cat /etc/ssl/certs/397166.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/397166.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "sudo cat /usr/share/ca-certificates/397166.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3971662.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "sudo cat /etc/ssl/certs/3971662.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3971662.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "sudo cat /usr/share/ca-certificates/3971662.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.89s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-442811 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-442811 ssh "sudo systemctl is-active crio": exit status 1 (308.159937ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-442811 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-442811
docker.io/kicbase/echo-server:functional-442811
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-442811 image ls --format short --alsologtostderr:
I1207 22:52:19.944952  483248 out.go:360] Setting OutFile to fd 1 ...
I1207 22:52:19.945062  483248 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:19.945067  483248 out.go:374] Setting ErrFile to fd 2...
I1207 22:52:19.945071  483248 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:19.945283  483248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
I1207 22:52:19.945840  483248 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1207 22:52:19.945935  483248 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1207 22:52:19.946362  483248 cli_runner.go:164] Run: docker container inspect functional-442811 --format={{.State.Status}}
I1207 22:52:19.964852  483248 ssh_runner.go:195] Run: systemctl --version
I1207 22:52:19.964901  483248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
I1207 22:52:19.983282  483248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-442811/id_rsa Username:docker}
I1207 22:52:20.076698  483248 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-442811 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 70.7MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 51.7MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                          │ functional-442811 │ d43ce4a31bd01 │ 1.24MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0    │ aa9d02839d8de │ 89.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0    │ 45f3cc72d235f │ 75.8MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ docker.io/library/minikube-local-cache-test │ functional-442811 │ d4667176eb768 │ 30B    │
│ docker.io/kicbase/echo-server               │ functional-442811 │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-442811 image ls --format table --alsologtostderr:
I1207 22:52:23.996164  483780 out.go:360] Setting OutFile to fd 1 ...
I1207 22:52:23.996486  483780 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:23.996498  483780 out.go:374] Setting ErrFile to fd 2...
I1207 22:52:23.996505  483780 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:23.996779  483780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
I1207 22:52:23.997398  483780 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1207 22:52:23.997536  483780 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1207 22:52:23.998034  483780 cli_runner.go:164] Run: docker container inspect functional-442811 --format={{.State.Status}}
I1207 22:52:24.017160  483780 ssh_runner.go:195] Run: systemctl --version
I1207 22:52:24.017233  483780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
I1207 22:52:24.035527  483780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-442811/id_rsa Username:docker}
I1207 22:52:24.128401  483780 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E1207 22:52:49.706148  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:53:17.408102  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 22:53:27.539330  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-442811 image ls --format json --alsologtostderr:
[{"id":"d43ce4a31bd019fa6fa797b88e13ed03a32332c0f6857e1ee17a49bd3dc5f5b5","repoDigests":[],"repoTags":["localhost/my-image:functional-442811"],"size":"1240000"},{"id":"d4667176eb7683a7d090cf7d477302f978783636b03e35875012ae8f3717dfbf","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-442811"],"size":"30"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-442811"],"size":"4940000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"7bb6219ddab95bdabbef83f051bee
4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"51700000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"89700000"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"75800000"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"70700000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","re
poDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-442811 image ls --format json --alsologtostderr:
I1207 22:52:23.773452  483727 out.go:360] Setting OutFile to fd 1 ...
I1207 22:52:23.773705  483727 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:23.773713  483727 out.go:374] Setting ErrFile to fd 2...
I1207 22:52:23.773717  483727 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:23.773948  483727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
I1207 22:52:23.774459  483727 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1207 22:52:23.774546  483727 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1207 22:52:23.775001  483727 cli_runner.go:164] Run: docker container inspect functional-442811 --format={{.State.Status}}
I1207 22:52:23.793218  483727 ssh_runner.go:195] Run: systemctl --version
I1207 22:52:23.793268  483727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
I1207 22:52:23.811202  483727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-442811/id_rsa Username:docker}
I1207 22:52:23.903354  483727 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)
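
The stdout above is a flat JSON array of {id, repoDigests, repoTags, size} objects, so it pipes cleanly into standard JSON tooling. A quick way to pull one tag/size pair per image, assuming jq is available on the host (an illustrative addition, not something the test itself runs):
out/minikube-linux-amd64 -p functional-442811 image ls --format json | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'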

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-442811 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-442811
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "51700000"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "75800000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d4667176eb7683a7d090cf7d477302f978783636b03e35875012ae8f3717dfbf
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-442811
size: "30"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "89700000"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "70700000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-442811 image ls --format yaml --alsologtostderr:
I1207 22:52:20.168214  483303 out.go:360] Setting OutFile to fd 1 ...
I1207 22:52:20.168480  483303 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:20.168490  483303 out.go:374] Setting ErrFile to fd 2...
I1207 22:52:20.168497  483303 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:20.168739  483303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
I1207 22:52:20.169302  483303 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1207 22:52:20.169420  483303 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1207 22:52:20.169896  483303 cli_runner.go:164] Run: docker container inspect functional-442811 --format={{.State.Status}}
I1207 22:52:20.188022  483303 ssh_runner.go:195] Run: systemctl --version
I1207 22:52:20.188071  483303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
I1207 22:52:20.206755  483303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-442811/id_rsa Username:docker}
I1207 22:52:20.300411  483303 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-442811 ssh pgrep buildkitd: exit status 1 (268.877925ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image build -t localhost/my-image:functional-442811 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-442811 image build -t localhost/my-image:functional-442811 testdata/build --alsologtostderr: (2.892308725s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-442811 image build -t localhost/my-image:functional-442811 testdata/build --alsologtostderr:
I1207 22:52:20.659925  483463 out.go:360] Setting OutFile to fd 1 ...
I1207 22:52:20.660218  483463 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:20.660228  483463 out.go:374] Setting ErrFile to fd 2...
I1207 22:52:20.660232  483463 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:52:20.660442  483463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
I1207 22:52:20.661046  483463 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1207 22:52:20.661776  483463 config.go:182] Loaded profile config "functional-442811": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1207 22:52:20.662253  483463 cli_runner.go:164] Run: docker container inspect functional-442811 --format={{.State.Status}}
I1207 22:52:20.680482  483463 ssh_runner.go:195] Run: systemctl --version
I1207 22:52:20.680552  483463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-442811
I1207 22:52:20.699703  483463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/functional-442811/id_rsa Username:docker}
I1207 22:52:20.792375  483463 build_images.go:162] Building image from path: /tmp/build.1183009271.tar
I1207 22:52:20.792431  483463 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1207 22:52:20.800689  483463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1183009271.tar
I1207 22:52:20.804558  483463 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1183009271.tar: stat -c "%s %y" /var/lib/minikube/build/build.1183009271.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1183009271.tar': No such file or directory
I1207 22:52:20.804591  483463 ssh_runner.go:362] scp /tmp/build.1183009271.tar --> /var/lib/minikube/build/build.1183009271.tar (3072 bytes)
I1207 22:52:20.823745  483463 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1183009271
I1207 22:52:20.831832  483463 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1183009271 -xf /var/lib/minikube/build/build.1183009271.tar
I1207 22:52:20.839923  483463 docker.go:361] Building image: /var/lib/minikube/build/build.1183009271
I1207 22:52:20.839987  483463 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-442811 /var/lib/minikube/build/build.1183009271
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:d43ce4a31bd019fa6fa797b88e13ed03a32332c0f6857e1ee17a49bd3dc5f5b5 done
#8 naming to localhost/my-image:functional-442811 done
#8 DONE 0.0s
I1207 22:52:23.467552  483463 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-442811 /var/lib/minikube/build/build.1183009271: (2.627535436s)
I1207 22:52:23.467632  483463 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1183009271
I1207 22:52:23.476166  483463 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1183009271.tar
I1207 22:52:23.484134  483463 build_images.go:218] Built localhost/my-image:functional-442811 from /tmp/build.1183009271.tar
I1207 22:52:23.484166  483463 build_images.go:134] succeeded building to: functional-442811
I1207 22:52:23.484170  483463 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.38s)
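
The BuildKit trace above pins down the three build steps the test exercises (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A hand-rolled equivalent, replaying the same image build command against a scratch context, would look roughly like the sketch below; the scratch directory and the content.txt payload are illustrative assumptions, not the actual testdata/build contents:
CTX="$(mktemp -d)"                                   # assumed scratch dir standing in for testdata/build
echo "content" > "$CTX/content.txt"                  # placeholder payload (assumption)
cat > "$CTX/Dockerfile" <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-442811 image build -t localhost/my-image:functional-442811 "$CTX" --alsologtostderr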

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-442811
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash (1.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-442811 docker-env) && out/minikube-linux-amd64 status -p functional-442811"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-442811 docker-env) && docker images"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash (1.11s)
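
The DockerEnv check above boils down to the usual docker-env workflow: export the cluster node's Docker endpoint into the current shell, then drive it with the plain docker CLI. A minimal replay of what the test runs:
eval "$(out/minikube-linux-amd64 -p functional-442811 docker-env)"   # sets DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH
docker images                                                        # now lists images inside the functional-442811 node
# A fresh shell (or unsetting the variables above) points docker back at the host daemon.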

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image load --daemon kicbase/echo-server:functional-442811 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image load --daemon kicbase/echo-server:functional-442811 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-442811 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-442811 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-442811 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 472725: os: process already finished
helpers_test.go:519: unable to terminate pid 472452: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-442811 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-442811
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image load --daemon kicbase/echo-server:functional-442811 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-442811 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image save kicbase/echo-server:functional-442811 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image rm kicbase/echo-server:functional-442811 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.60s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-442811
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 image save --daemon kicbase/echo-server:functional-442811 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-442811
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-442811 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "334.577871ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.017818ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "332.976891ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "64.728748ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (7.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1395108936/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765147922617282077" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1395108936/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765147922617282077" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1395108936/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765147922617282077" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1395108936/001/test-1765147922617282077
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-442811 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.548485ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 22:52:02.898118  397166 retry.go:31] will retry after 256.912437ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  7 22:52 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  7 22:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  7 22:52 test-1765147922617282077
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh cat /mount-9p/test-1765147922617282077
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-442811 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c841ff39-bf01-4031-910c-0e5b10ccf76f] Pending
E1207 22:52:04.468892  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [c841ff39-bf01-4031-910c-0e5b10ccf76f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c841ff39-bf01-4031-910c-0e5b10ccf76f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c841ff39-bf01-4031-910c-0e5b10ccf76f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003679119s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-442811 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1395108936/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (7.48s)
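
The any-port exercise above reduces to a mount/verify/cleanup loop that can be replayed by hand. In the sketch below the host directory is a throwaway stand-in for the test's temp path, and the fixed sleep replaces the test's findmnt retry loop:
SRC="$(mktemp -d)"                                   # stand-in for the test's temp dir (assumption)
echo "test-by-hand" > "$SRC/created-by-hand"
out/minikube-linux-amd64 mount -p functional-442811 "$SRC:/mount-9p" --alsologtostderr -v=1 &
MOUNT_PID=$!
sleep 3                                              # crude wait; the test retries findmnt instead
out/minikube-linux-amd64 -p functional-442811 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-442811 ssh -- ls -la /mount-9p
kill "$MOUNT_PID"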

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.98s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2396619733/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-442811 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (283.733727ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 22:52:10.376388  397166 retry.go:31] will retry after 662.594143ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2396619733/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-442811 ssh "sudo umount -f /mount-9p": exit status 1 (274.19418ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-442811 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2396619733/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.98s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-442811 ssh "findmnt -T" /mount1: exit status 1 (340.442199ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 22:52:12.413028  397166 retry.go:31] will retry after 709.465132ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-442811 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-442811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo68706730/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.93s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-442811 service list: (1.714130942s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.71s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-442811 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-442811 service list -o json: (1.701467624s)
functional_test.go:1504: Took "1.701582939s" to run "out/minikube-linux-amd64 -p functional-442811 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.70s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-442811
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-442811
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-442811
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (123.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1207 23:02:04.468289  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:02:49.706688  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-163597 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m2.932949889s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (123.67s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-163597 kubectl -- rollout status deployment/busybox: (3.681267522s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-6c29v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-l9sb2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-pwc9j -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-6c29v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-l9sb2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-pwc9j -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-6c29v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-l9sb2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-pwc9j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.97s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-6c29v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-6c29v -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-l9sb2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-l9sb2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-pwc9j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 kubectl -- exec busybox-7b57f96db7-pwc9j -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)
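
The host-reachability check reduces to resolving host.minikube.internal inside a pod and pinging the address it returns (the kic gateway, 192.168.49.1 in this run); a rough sketch with a placeholder pod name:

  minikube -p ha-163597 kubectl -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  minikube -p ha-163597 kubectl -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.49.1"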

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (34.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 node add --alsologtostderr -v 5
E1207 23:04:12.769709  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-163597 node add --alsologtostderr -v 5: (33.179927995s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (34.08s)
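
Adding a worker to an existing multi-control-plane profile and confirming it joined uses the same two commands the test drives, plus an optional kubectl check; a sketch, assuming a minikube binary on PATH:

  minikube -p ha-163597 node add
  minikube -p ha-163597 status
  kubectl --context ha-163597 get nodes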

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-163597 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.66s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp testdata/cp-test.txt ha-163597:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3067446594/001/cp-test_ha-163597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597:/home/docker/cp-test.txt ha-163597-m02:/home/docker/cp-test_ha-163597_ha-163597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m02 "sudo cat /home/docker/cp-test_ha-163597_ha-163597-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597:/home/docker/cp-test.txt ha-163597-m03:/home/docker/cp-test_ha-163597_ha-163597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m03 "sudo cat /home/docker/cp-test_ha-163597_ha-163597-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597:/home/docker/cp-test.txt ha-163597-m04:/home/docker/cp-test_ha-163597_ha-163597-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m04 "sudo cat /home/docker/cp-test_ha-163597_ha-163597-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp testdata/cp-test.txt ha-163597-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3067446594/001/cp-test_ha-163597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597-m02:/home/docker/cp-test.txt ha-163597:/home/docker/cp-test_ha-163597-m02_ha-163597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597 "sudo cat /home/docker/cp-test_ha-163597-m02_ha-163597.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597-m02:/home/docker/cp-test.txt ha-163597-m03:/home/docker/cp-test_ha-163597-m02_ha-163597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m03 "sudo cat /home/docker/cp-test_ha-163597-m02_ha-163597-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597-m02:/home/docker/cp-test.txt ha-163597-m04:/home/docker/cp-test_ha-163597-m02_ha-163597-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m04 "sudo cat /home/docker/cp-test_ha-163597-m02_ha-163597-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp testdata/cp-test.txt ha-163597-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3067446594/001/cp-test_ha-163597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597-m03:/home/docker/cp-test.txt ha-163597:/home/docker/cp-test_ha-163597-m03_ha-163597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597 "sudo cat /home/docker/cp-test_ha-163597-m03_ha-163597.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597-m03:/home/docker/cp-test.txt ha-163597-m02:/home/docker/cp-test_ha-163597-m03_ha-163597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m02 "sudo cat /home/docker/cp-test_ha-163597-m03_ha-163597-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597-m03:/home/docker/cp-test.txt ha-163597-m04:/home/docker/cp-test_ha-163597-m03_ha-163597-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m04 "sudo cat /home/docker/cp-test_ha-163597-m03_ha-163597-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp testdata/cp-test.txt ha-163597-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3067446594/001/cp-test_ha-163597-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597-m04:/home/docker/cp-test.txt ha-163597:/home/docker/cp-test_ha-163597-m04_ha-163597.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597 "sudo cat /home/docker/cp-test_ha-163597-m04_ha-163597.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597-m04:/home/docker/cp-test.txt ha-163597-m02:/home/docker/cp-test_ha-163597-m04_ha-163597-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m02 "sudo cat /home/docker/cp-test_ha-163597-m04_ha-163597-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 cp ha-163597-m04:/home/docker/cp-test.txt ha-163597-m03:/home/docker/cp-test_ha-163597-m04_ha-163597-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 ssh -n ha-163597-m03 "sudo cat /home/docker/cp-test_ha-163597-m04_ha-163597-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.66s)
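
The copy matrix above exercises every (source, destination) node pair, but the underlying primitive is just minikube cp plus minikube ssh for verification; a minimal sketch for one pair:

  minikube -p ha-163597 cp testdata/cp-test.txt ha-163597-m02:/home/docker/cp-test.txt
  minikube -p ha-163597 ssh -n ha-163597-m02 "sudo cat /home/docker/cp-test.txt"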

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.68s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-163597 node stop m02 --alsologtostderr -v 5: (10.957460397s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-163597 status --alsologtostderr -v 5: exit status 7 (716.998384ms)

                                                
                                                
-- stdout --
	ha-163597
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163597-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-163597-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-163597-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:04:58.486590  516734 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:04:58.486712  516734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:04:58.486716  516734 out.go:374] Setting ErrFile to fd 2...
	I1207 23:04:58.486720  516734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:04:58.486978  516734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 23:04:58.487165  516734 out.go:368] Setting JSON to false
	I1207 23:04:58.487195  516734 mustload.go:66] Loading cluster: ha-163597
	I1207 23:04:58.487334  516734 notify.go:221] Checking for updates...
	I1207 23:04:58.487565  516734 config.go:182] Loaded profile config "ha-163597": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1207 23:04:58.487583  516734 status.go:174] checking status of ha-163597 ...
	I1207 23:04:58.488080  516734 cli_runner.go:164] Run: docker container inspect ha-163597 --format={{.State.Status}}
	I1207 23:04:58.510031  516734 status.go:371] ha-163597 host status = "Running" (err=<nil>)
	I1207 23:04:58.510090  516734 host.go:66] Checking if "ha-163597" exists ...
	I1207 23:04:58.510411  516734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-163597
	I1207 23:04:58.529156  516734 host.go:66] Checking if "ha-163597" exists ...
	I1207 23:04:58.529430  516734 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:04:58.529500  516734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-163597
	I1207 23:04:58.548054  516734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33172 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/ha-163597/id_rsa Username:docker}
	I1207 23:04:58.640805  516734 ssh_runner.go:195] Run: systemctl --version
	I1207 23:04:58.648434  516734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:04:58.662545  516734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:04:58.720045  516734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-07 23:04:58.709557352 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:04:58.720630  516734 kubeconfig.go:125] found "ha-163597" server: "https://192.168.49.254:8443"
	I1207 23:04:58.720664  516734 api_server.go:166] Checking apiserver status ...
	I1207 23:04:58.720699  516734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:04:58.733950  516734 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2229/cgroup
	W1207 23:04:58.743313  516734 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2229/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:04:58.743394  516734 ssh_runner.go:195] Run: ls
	I1207 23:04:58.747681  516734 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1207 23:04:58.751946  516734 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1207 23:04:58.751990  516734 status.go:463] ha-163597 apiserver status = Running (err=<nil>)
	I1207 23:04:58.752012  516734 status.go:176] ha-163597 status: &{Name:ha-163597 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:04:58.752034  516734 status.go:174] checking status of ha-163597-m02 ...
	I1207 23:04:58.752294  516734 cli_runner.go:164] Run: docker container inspect ha-163597-m02 --format={{.State.Status}}
	I1207 23:04:58.771340  516734 status.go:371] ha-163597-m02 host status = "Stopped" (err=<nil>)
	I1207 23:04:58.771372  516734 status.go:384] host is not running, skipping remaining checks
	I1207 23:04:58.771378  516734 status.go:176] ha-163597-m02 status: &{Name:ha-163597-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:04:58.771401  516734 status.go:174] checking status of ha-163597-m03 ...
	I1207 23:04:58.771704  516734 cli_runner.go:164] Run: docker container inspect ha-163597-m03 --format={{.State.Status}}
	I1207 23:04:58.790669  516734 status.go:371] ha-163597-m03 host status = "Running" (err=<nil>)
	I1207 23:04:58.790692  516734 host.go:66] Checking if "ha-163597-m03" exists ...
	I1207 23:04:58.790939  516734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-163597-m03
	I1207 23:04:58.809108  516734 host.go:66] Checking if "ha-163597-m03" exists ...
	I1207 23:04:58.809393  516734 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:04:58.809440  516734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-163597-m03
	I1207 23:04:58.828212  516734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/ha-163597-m03/id_rsa Username:docker}
	I1207 23:04:58.921754  516734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:04:58.935545  516734 kubeconfig.go:125] found "ha-163597" server: "https://192.168.49.254:8443"
	I1207 23:04:58.935572  516734 api_server.go:166] Checking apiserver status ...
	I1207 23:04:58.935626  516734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:04:58.948864  516734 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2114/cgroup
	W1207 23:04:58.957794  516734 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2114/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:04:58.957865  516734 ssh_runner.go:195] Run: ls
	I1207 23:04:58.961859  516734 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1207 23:04:58.966209  516734 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1207 23:04:58.966244  516734 status.go:463] ha-163597-m03 apiserver status = Running (err=<nil>)
	I1207 23:04:58.966255  516734 status.go:176] ha-163597-m03 status: &{Name:ha-163597-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:04:58.966286  516734 status.go:174] checking status of ha-163597-m04 ...
	I1207 23:04:58.966627  516734 cli_runner.go:164] Run: docker container inspect ha-163597-m04 --format={{.State.Status}}
	I1207 23:04:58.989010  516734 status.go:371] ha-163597-m04 host status = "Running" (err=<nil>)
	I1207 23:04:58.989041  516734 host.go:66] Checking if "ha-163597-m04" exists ...
	I1207 23:04:58.989309  516734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-163597-m04
	I1207 23:04:59.008772  516734 host.go:66] Checking if "ha-163597-m04" exists ...
	I1207 23:04:59.009094  516734 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:04:59.009132  516734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-163597-m04
	I1207 23:04:59.027684  516734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/ha-163597-m04/id_rsa Username:docker}
	I1207 23:04:59.120353  516734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:04:59.135704  516734 status.go:176] ha-163597-m04 status: &{Name:ha-163597-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.68s)
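
Note that status deliberately exits non-zero (7 in this run) whenever at least one node is not running, which is why the "Non-zero exit" above is expected rather than a failure; a sketch of the same check:

  minikube -p ha-163597 node stop m02
  minikube -p ha-163597 status || echo "status exited with $?"   # 7 expected while m02 is stopped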

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (37.55s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-163597 node start m02 --alsologtostderr -v 5: (36.474166187s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-163597 status --alsologtostderr -v 5: (1.011426741s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.55s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (150.43s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 stop --alsologtostderr -v 5
E1207 23:05:51.612614  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:51.619073  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:51.630445  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:51.651912  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:51.693380  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:51.774872  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:51.937024  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:52.258918  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:52.901108  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:54.182747  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:05:56.744114  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:06:01.866390  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:06:12.107904  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-163597 stop --alsologtostderr -v 5: (34.330834857s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 start --wait true --alsologtostderr -v 5
E1207 23:06:32.589834  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:07:04.468810  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:07:13.552775  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:07:49.705891  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-163597 start --wait true --alsologtostderr -v 5: (1m55.943990516s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (150.43s)
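
The sequence here is: record the node list, stop the whole profile, restart it with --wait true, and confirm the same nodes come back; a sketch:

  minikube -p ha-163597 node list
  minikube -p ha-163597 stop
  minikube -p ha-163597 start --wait true
  minikube -p ha-163597 node list   # should match the pre-stop list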

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.7s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-163597 node delete m03 --alsologtostderr -v 5: (8.844520635s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.70s)
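
Removing a control-plane member and verifying the survivors stay Ready follows the same pattern; a sketch:

  minikube -p ha-163597 node delete m03
  minikube -p ha-163597 status
  kubectl --context ha-163597 get nodes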

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.65s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 stop --alsologtostderr -v 5
E1207 23:08:35.475123  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-163597 stop --alsologtostderr -v 5: (32.527511104s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-163597 status --alsologtostderr -v 5: exit status 7 (124.309623ms)

                                                
                                                
-- stdout --
	ha-163597
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-163597-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-163597-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:08:51.772765  547182 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:08:51.772888  547182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:08:51.772899  547182 out.go:374] Setting ErrFile to fd 2...
	I1207 23:08:51.772906  547182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:08:51.773167  547182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 23:08:51.773380  547182 out.go:368] Setting JSON to false
	I1207 23:08:51.773416  547182 mustload.go:66] Loading cluster: ha-163597
	I1207 23:08:51.773488  547182 notify.go:221] Checking for updates...
	I1207 23:08:51.773826  547182 config.go:182] Loaded profile config "ha-163597": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1207 23:08:51.773844  547182 status.go:174] checking status of ha-163597 ...
	I1207 23:08:51.774345  547182 cli_runner.go:164] Run: docker container inspect ha-163597 --format={{.State.Status}}
	I1207 23:08:51.795314  547182 status.go:371] ha-163597 host status = "Stopped" (err=<nil>)
	I1207 23:08:51.795355  547182 status.go:384] host is not running, skipping remaining checks
	I1207 23:08:51.795369  547182 status.go:176] ha-163597 status: &{Name:ha-163597 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:08:51.795395  547182 status.go:174] checking status of ha-163597-m02 ...
	I1207 23:08:51.795687  547182 cli_runner.go:164] Run: docker container inspect ha-163597-m02 --format={{.State.Status}}
	I1207 23:08:51.814055  547182 status.go:371] ha-163597-m02 host status = "Stopped" (err=<nil>)
	I1207 23:08:51.814080  547182 status.go:384] host is not running, skipping remaining checks
	I1207 23:08:51.814086  547182 status.go:176] ha-163597-m02 status: &{Name:ha-163597-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:08:51.814106  547182 status.go:174] checking status of ha-163597-m04 ...
	I1207 23:08:51.814370  547182 cli_runner.go:164] Run: docker container inspect ha-163597-m04 --format={{.State.Status}}
	I1207 23:08:51.832592  547182 status.go:371] ha-163597-m04 host status = "Stopped" (err=<nil>)
	I1207 23:08:51.832630  547182 status.go:384] host is not running, skipping remaining checks
	I1207 23:08:51.832638  547182 status.go:176] ha-163597-m04 status: &{Name:ha-163597-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.65s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (72.35s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-163597 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m11.533817112s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (72.35s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (53.5s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 node add --control-plane --alsologtostderr -v 5
E1207 23:10:07.541472  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:10:51.613438  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-163597 node add --control-plane --alsologtostderr -v 5: (52.625996683s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-163597 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (53.50s)
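
Growing the cluster back to three control planes is a single node add with --control-plane; a sketch:

  minikube -p ha-163597 node add --control-plane
  minikube -p ha-163597 status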

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                    
TestImageBuild/serial/Setup (23.28s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-688860 --driver=docker  --container-runtime=docker
E1207 23:11:19.319763  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-688860 --driver=docker  --container-runtime=docker: (23.27805428s)
--- PASS: TestImageBuild/serial/Setup (23.28s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.14s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-688860
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-688860: (1.139524206s)
--- PASS: TestImageBuild/serial/NormalBuild (1.14s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.69s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-688860
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.69s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.49s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-688860
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.49s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.52s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-688860
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.52s)
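
The four image-build subtests map directly onto flags of minikube image build; a condensed sketch, assuming a local ./app build context (the path is illustrative, not the test's testdata):

  minikube -p image-688860 image build -t aaa:latest ./app
  minikube -p image-688860 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./app
  minikube -p image-688860 image build -t aaa:latest -f inner/Dockerfile ./app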

                                                
                                    
TestJSONOutput/start/Command (63.03s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-639768 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E1207 23:12:04.470659  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-639768 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m3.033250105s)
--- PASS: TestJSONOutput/start/Command (63.03s)
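
With --output=json each progress step is emitted as one CloudEvents-style JSON object per line (see the TestErrorJSONOutput dump later in this report for the shape), so the stream can be post-processed with standard tools; a sketch using jq, assuming jq is installed:

  minikube start -p json-output-639768 --output=json --user=testUser --memory=3072 --wait=true --driver=docker --container-runtime=docker \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'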

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.51s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-639768 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.51s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.48s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-639768 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.48s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.94s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-639768 --output=json --user=testUser
E1207 23:12:49.705956  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-639768 --output=json --user=testUser: (10.939808034s)
--- PASS: TestJSONOutput/stop/Command (10.94s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-227737 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-227737 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (83.268204ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"73cdc98f-3717-45eb-986d-bdd787264923","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-227737] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"76e00791-51f4-41bd-8aa2-6019d2ecbc3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22054"}}
	{"specversion":"1.0","id":"5d2ea293-5335-4275-987c-d1ee5a86e45f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e48b6311-4a45-4d1e-a052-2e006fce05be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig"}}
	{"specversion":"1.0","id":"0e02b262-c810-4edd-945a-e2c1205fd5c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube"}}
	{"specversion":"1.0","id":"5642c503-2238-4338-a29d-91934e448bec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"616f07ad-4c35-4769-bf1a-fcd705ff93d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2c9ca577-c83e-49d5-acb5-1f6e6d348270","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-227737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-227737
--- PASS: TestErrorJSONOutput (0.24s)
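
The assertion here hinges on the final io.k8s.sigs.minikube.error event, which carries the exit code (56) and the DRV_UNSUPPORTED_OS name for the deliberately bogus driver; extracting it is a one-liner, again assuming jq is installed:

  minikube start -p json-output-error-227737 --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'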

                                                
                                    
TestKicCustomNetwork/create_custom_network (25.69s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-043684 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-043684 --network=: (23.506984132s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-043684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-043684
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-043684: (2.167526712s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.69s)
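
The --network flag chooses the Docker network the kic node container joins: left empty (as here) minikube typically creates a dedicated user-defined network for the profile, while --network=bridge (next subtest) reuses Docker's default bridge; a sketch:

  minikube start -p docker-network-043684 --network=
  docker network ls --format '{{.Name}}'
  minikube delete -p docker-network-043684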

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (25.44s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-825091 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-825091 --network=bridge: (23.3730998s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-825091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-825091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-825091: (2.046650489s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.44s)

                                                
                                    
TestKicExistingNetwork (25.66s)

=== RUN   TestKicExistingNetwork
I1207 23:13:43.949259  397166 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1207 23:13:43.967940  397166 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1207 23:13:43.968025  397166 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1207 23:13:43.968052  397166 cli_runner.go:164] Run: docker network inspect existing-network
W1207 23:13:43.985403  397166 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1207 23:13:43.985442  397166 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1207 23:13:43.985458  397166 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1207 23:13:43.985663  397166 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1207 23:13:44.004687  397166 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d1ae4c69ec1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:4b:6e:6b:a2:f7} reservation:<nil>}
I1207 23:13:44.005160  397166 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00207efd0}
I1207 23:13:44.005187  397166 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1207 23:13:44.005234  397166 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1207 23:13:44.053807  397166 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-832789 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-832789 --network=existing-network: (23.491598326s)
helpers_test.go:175: Cleaning up "existing-network-832789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-832789
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-832789: (2.029085393s)
I1207 23:14:09.592886  397166 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.66s)
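The network_create lines above show how a free subnet is picked for the new docker network: 192.168.49.0/24 is skipped because the default kic network already uses it, and 192.168.58.0/24 is chosen instead. A minimal Go sketch of that kind of check follows; it is an illustration only, and the candidate range and the step of 9 between candidates are assumptions rather than minikube's actual implementation.

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate 192.168.X.0/24 blocks and returns the
// first one that does not overlap any subnet already taken by an existing
// docker network (hypothetical helper, mirroring the "skipping subnet ...
// that is taken" / "using free private subnet" log lines above).
func firstFreeSubnet(taken []string) (string, error) {
	for third := 49; third <= 247; third += 9 { // step of 9 is an assumption
		candidate := fmt.Sprintf("192.168.%d.0/24", third)
		_, candNet, err := net.ParseCIDR(candidate)
		if err != nil {
			return "", err
		}
		free := true
		for _, t := range taken {
			_, tNet, err := net.ParseCIDR(t)
			if err != nil {
				continue // ignore malformed entries
			}
			// CIDR blocks are either disjoint or nested, so they overlap
			// exactly when one contains the other's base address.
			if tNet.Contains(candNet.IP) || candNet.Contains(tNet.IP) {
				free = false
				break
			}
		}
		if free {
			return candidate, nil
		}
	}
	return "", fmt.Errorf("no free subnet found")
}

func main() {
	// 192.168.49.0/24 is taken by the default kic network, as in the log.
	sub, err := firstFreeSubnet([]string{"192.168.49.0/24"})
	if err != nil {
		panic(err)
	}
	fmt.Println(sub) // 192.168.58.0/24
}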

                                                
                                    
TestKicCustomSubnet (23.21s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-843695 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-843695 --subnet=192.168.60.0/24: (21.020768425s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-843695 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-843695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-843695
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-843695: (2.163452381s)
--- PASS: TestKicCustomSubnet (23.21s)

                                                
                                    
TestKicStaticIP (26.7s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-520927 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-520927 --static-ip=192.168.200.200: (24.367273946s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-520927 ip
helpers_test.go:175: Cleaning up "static-ip-520927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-520927
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-520927: (2.176973113s)
--- PASS: TestKicStaticIP (26.70s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (53.11s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-666304 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-666304 --driver=docker  --container-runtime=docker: (23.401005295s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-669496 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-669496 --driver=docker  --container-runtime=docker: (24.146508725s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-666304
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-669496
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-669496" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-669496
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-669496: (2.147345634s)
helpers_test.go:175: Cleaning up "first-666304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-666304
E1207 23:15:51.612866  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-666304: (2.154024276s)
--- PASS: TestMinikubeProfile (53.11s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.49s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-511350 --memory=3072 --mount-string /tmp/TestMountStartserial2590489549/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-511350 --memory=3072 --mount-string /tmp/TestMountStartserial2590489549/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.487480796s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.49s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-511350 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.54s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-528249 --memory=3072 --mount-string /tmp/TestMountStartserial2590489549/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-528249 --memory=3072 --mount-string /tmp/TestMountStartserial2590489549/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.539953984s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.54s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-528249 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.53s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-511350 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-511350 --alsologtostderr -v=5: (1.533328463s)
--- PASS: TestMountStart/serial/DeleteFirst (1.53s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-528249 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-528249
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-528249: (1.253476229s)
--- PASS: TestMountStart/serial/Stop (1.25s)

                                                
                                    
TestMountStart/serial/RestartStopped (9.36s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-528249
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-528249: (8.357244289s)
--- PASS: TestMountStart/serial/RestartStopped (9.36s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-528249 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (75.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-040868 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1207 23:17:04.468490  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-040868 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m15.06097223s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.57s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-040868 -- rollout status deployment/busybox: (3.527758421s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- exec busybox-7b57f96db7-9wdwd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- exec busybox-7b57f96db7-qf5x5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- exec busybox-7b57f96db7-9wdwd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- exec busybox-7b57f96db7-qf5x5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- exec busybox-7b57f96db7-9wdwd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- exec busybox-7b57f96db7-qf5x5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.23s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- exec busybox-7b57f96db7-9wdwd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- exec busybox-7b57f96db7-9wdwd -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- exec busybox-7b57f96db7-qf5x5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040868 -- exec busybox-7b57f96db7-qf5x5 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

                                                
                                    
TestMultiNode/serial/AddNode (33.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-040868 -v=5 --alsologtostderr
E1207 23:17:49.708103  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-040868 -v=5 --alsologtostderr: (33.179412834s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (33.83s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-040868 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 cp testdata/cp-test.txt multinode-040868:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 cp multinode-040868:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1659165373/001/cp-test_multinode-040868.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 cp multinode-040868:/home/docker/cp-test.txt multinode-040868-m02:/home/docker/cp-test_multinode-040868_multinode-040868-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868-m02 "sudo cat /home/docker/cp-test_multinode-040868_multinode-040868-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 cp multinode-040868:/home/docker/cp-test.txt multinode-040868-m03:/home/docker/cp-test_multinode-040868_multinode-040868-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868-m03 "sudo cat /home/docker/cp-test_multinode-040868_multinode-040868-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 cp testdata/cp-test.txt multinode-040868-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 cp multinode-040868-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1659165373/001/cp-test_multinode-040868-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 cp multinode-040868-m02:/home/docker/cp-test.txt multinode-040868:/home/docker/cp-test_multinode-040868-m02_multinode-040868.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868 "sudo cat /home/docker/cp-test_multinode-040868-m02_multinode-040868.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 cp multinode-040868-m02:/home/docker/cp-test.txt multinode-040868-m03:/home/docker/cp-test_multinode-040868-m02_multinode-040868-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868-m03 "sudo cat /home/docker/cp-test_multinode-040868-m02_multinode-040868-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 cp testdata/cp-test.txt multinode-040868-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 cp multinode-040868-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1659165373/001/cp-test_multinode-040868-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 cp multinode-040868-m03:/home/docker/cp-test.txt multinode-040868:/home/docker/cp-test_multinode-040868-m03_multinode-040868.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868 "sudo cat /home/docker/cp-test_multinode-040868-m03_multinode-040868.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 cp multinode-040868-m03:/home/docker/cp-test.txt multinode-040868-m02:/home/docker/cp-test_multinode-040868-m03_multinode-040868-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 ssh -n multinode-040868-m02 "sudo cat /home/docker/cp-test_multinode-040868-m03_multinode-040868-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.95s)

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-040868 node stop m03: (1.295818647s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-040868 status: exit status 7 (496.577195ms)

                                                
                                                
-- stdout --
	multinode-040868
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-040868-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-040868-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-040868 status --alsologtostderr: exit status 7 (497.462585ms)

                                                
                                                
-- stdout --
	multinode-040868
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-040868-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-040868-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:18:31.756947  629628 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:18:31.757208  629628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:18:31.757216  629628 out.go:374] Setting ErrFile to fd 2...
	I1207 23:18:31.757221  629628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:18:31.757412  629628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 23:18:31.757577  629628 out.go:368] Setting JSON to false
	I1207 23:18:31.757613  629628 mustload.go:66] Loading cluster: multinode-040868
	I1207 23:18:31.757753  629628 notify.go:221] Checking for updates...
	I1207 23:18:31.757988  629628 config.go:182] Loaded profile config "multinode-040868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1207 23:18:31.758006  629628 status.go:174] checking status of multinode-040868 ...
	I1207 23:18:31.758524  629628 cli_runner.go:164] Run: docker container inspect multinode-040868 --format={{.State.Status}}
	I1207 23:18:31.780552  629628 status.go:371] multinode-040868 host status = "Running" (err=<nil>)
	I1207 23:18:31.780615  629628 host.go:66] Checking if "multinode-040868" exists ...
	I1207 23:18:31.780895  629628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-040868
	I1207 23:18:31.798423  629628 host.go:66] Checking if "multinode-040868" exists ...
	I1207 23:18:31.798727  629628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:18:31.798776  629628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-040868
	I1207 23:18:31.816618  629628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33297 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/multinode-040868/id_rsa Username:docker}
	I1207 23:18:31.908439  629628 ssh_runner.go:195] Run: systemctl --version
	I1207 23:18:31.914952  629628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:18:31.927685  629628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:18:31.985200  629628 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-07 23:18:31.975152688 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:18:31.985769  629628 kubeconfig.go:125] found "multinode-040868" server: "https://192.168.67.2:8443"
	I1207 23:18:31.985803  629628 api_server.go:166] Checking apiserver status ...
	I1207 23:18:31.985844  629628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:18:31.998752  629628 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2108/cgroup
	W1207 23:18:32.007347  629628 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:18:32.007444  629628 ssh_runner.go:195] Run: ls
	I1207 23:18:32.011368  629628 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 23:18:32.015747  629628 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1207 23:18:32.015780  629628 status.go:463] multinode-040868 apiserver status = Running (err=<nil>)
	I1207 23:18:32.015789  629628 status.go:176] multinode-040868 status: &{Name:multinode-040868 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:18:32.015805  629628 status.go:174] checking status of multinode-040868-m02 ...
	I1207 23:18:32.016042  629628 cli_runner.go:164] Run: docker container inspect multinode-040868-m02 --format={{.State.Status}}
	I1207 23:18:32.033779  629628 status.go:371] multinode-040868-m02 host status = "Running" (err=<nil>)
	I1207 23:18:32.033802  629628 host.go:66] Checking if "multinode-040868-m02" exists ...
	I1207 23:18:32.034063  629628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-040868-m02
	I1207 23:18:32.051178  629628 host.go:66] Checking if "multinode-040868-m02" exists ...
	I1207 23:18:32.051491  629628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:18:32.051540  629628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-040868-m02
	I1207 23:18:32.069355  629628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33302 SSHKeyPath:/home/jenkins/minikube-integration/22054-393577/.minikube/machines/multinode-040868-m02/id_rsa Username:docker}
	I1207 23:18:32.161006  629628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:18:32.173540  629628 status.go:176] multinode-040868-m02 status: &{Name:multinode-040868-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:18:32.173575  629628 status.go:174] checking status of multinode-040868-m03 ...
	I1207 23:18:32.173886  629628 cli_runner.go:164] Run: docker container inspect multinode-040868-m03 --format={{.State.Status}}
	I1207 23:18:32.191876  629628 status.go:371] multinode-040868-m03 host status = "Stopped" (err=<nil>)
	I1207 23:18:32.191898  629628 status.go:384] host is not running, skipping remaining checks
	I1207 23:18:32.191905  629628 status.go:176] multinode-040868-m03 status: &{Name:multinode-040868-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
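The stderr trace above ends each node check with a status struct (&{Name:... Host:... Kubelet:... APIServer:... Kubeconfig:... Worker:...}) that is then rendered as the host/kubelet/apiserver/kubeconfig block shown in stdout. A minimal Go sketch of such a struct and its rendering follows; the field names are taken from the log, but the formatting code is an approximation, not minikube's own.

package main

import "fmt"

// NodeStatus mirrors the fields dumped in the status.go log lines above.
type NodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

// Render approximates the per-node block in the "-- stdout --" output:
// worker nodes report only host and kubelet, control planes also report
// apiserver and kubeconfig.
func (s NodeStatus) Render() string {
	out := s.Name + "\n"
	if s.Worker {
		out += "type: Worker\n"
	} else {
		out += "type: Control Plane\n"
	}
	out += fmt.Sprintf("host: %s\nkubelet: %s\n", s.Host, s.Kubelet)
	if !s.Worker {
		out += fmt.Sprintf("apiserver: %s\nkubeconfig: %s\n", s.APIServer, s.Kubeconfig)
	}
	return out
}

func main() {
	stopped := NodeStatus{Name: "multinode-040868-m03", Host: "Stopped", Kubelet: "Stopped", Worker: true}
	fmt.Print(stopped.Render())
}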

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-040868 node start m03 -v=5 --alsologtostderr: (7.999216137s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.71s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (69.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-040868
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-040868
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-040868: (22.863831911s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-040868 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-040868 --wait=true -v=5 --alsologtostderr: (46.68878606s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-040868
--- PASS: TestMultiNode/serial/RestartKeepsNodes (69.69s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-040868 node delete m03: (4.757323968s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.36s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-040868 stop: (21.783646637s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-040868 status: exit status 7 (105.946199ms)

                                                
                                                
-- stdout --
	multinode-040868
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-040868-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-040868 status --alsologtostderr: exit status 7 (104.496375ms)

                                                
                                                
-- stdout --
	multinode-040868
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-040868-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:20:17.903689  644355 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:20:17.903949  644355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:20:17.903959  644355 out.go:374] Setting ErrFile to fd 2...
	I1207 23:20:17.903963  644355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:20:17.904167  644355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 23:20:17.904345  644355 out.go:368] Setting JSON to false
	I1207 23:20:17.904374  644355 mustload.go:66] Loading cluster: multinode-040868
	I1207 23:20:17.904469  644355 notify.go:221] Checking for updates...
	I1207 23:20:17.904887  644355 config.go:182] Loaded profile config "multinode-040868": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1207 23:20:17.904920  644355 status.go:174] checking status of multinode-040868 ...
	I1207 23:20:17.905532  644355 cli_runner.go:164] Run: docker container inspect multinode-040868 --format={{.State.Status}}
	I1207 23:20:17.925182  644355 status.go:371] multinode-040868 host status = "Stopped" (err=<nil>)
	I1207 23:20:17.925206  644355 status.go:384] host is not running, skipping remaining checks
	I1207 23:20:17.925213  644355 status.go:176] multinode-040868 status: &{Name:multinode-040868 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:20:17.925251  644355 status.go:174] checking status of multinode-040868-m02 ...
	I1207 23:20:17.925520  644355 cli_runner.go:164] Run: docker container inspect multinode-040868-m02 --format={{.State.Status}}
	I1207 23:20:17.944192  644355 status.go:371] multinode-040868-m02 host status = "Stopped" (err=<nil>)
	I1207 23:20:17.944221  644355 status.go:384] host is not running, skipping remaining checks
	I1207 23:20:17.944230  644355 status.go:176] multinode-040868-m02 status: &{Name:multinode-040868-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.99s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-040868 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1207 23:20:51.612950  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:20:52.771401  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-040868 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (48.989422678s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040868 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.59s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-040868
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-040868-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-040868-m02 --driver=docker  --container-runtime=docker: exit status 14 (82.255915ms)

                                                
                                                
-- stdout --
	* [multinode-040868-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-040868-m02' is duplicated with machine name 'multinode-040868-m02' in profile 'multinode-040868'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-040868-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-040868-m03 --driver=docker  --container-runtime=docker: (23.968943646s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-040868
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-040868: exit status 80 (292.549426ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-040868 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-040868-m03 already exists in multinode-040868-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-040868-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-040868-m03: (2.220819322s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.63s)

                                                
                                    
TestPreload (104.85s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-740557 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker
E1207 23:22:04.468885  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:22:14.682119  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-740557 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker: (43.388082604s)
preload_test.go:53: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-740557 image pull gcr.io/k8s-minikube/busybox
preload_test.go:53: (dbg) Done: out/minikube-linux-amd64 -p test-preload-740557 image pull gcr.io/k8s-minikube/busybox: (2.155084215s)
preload_test.go:59: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-740557
preload_test.go:59: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-740557: (10.908323742s)
preload_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-740557 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E1207 23:22:49.705726  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:67: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-740557 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (45.909102091s)
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-740557 image list
helpers_test.go:175: Cleaning up "test-preload-740557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-740557
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-740557: (2.253718263s)
--- PASS: TestPreload (104.85s)

                                                
                                    
TestScheduledStopUnix (98.79s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-875950 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-875950 --memory=3072 --driver=docker  --container-runtime=docker: (25.595542627s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-875950 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1207 23:23:48.804910  669337 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:23:48.805187  669337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:23:48.805197  669337 out.go:374] Setting ErrFile to fd 2...
	I1207 23:23:48.805202  669337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:23:48.805385  669337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 23:23:48.805686  669337 out.go:368] Setting JSON to false
	I1207 23:23:48.805780  669337 mustload.go:66] Loading cluster: scheduled-stop-875950
	I1207 23:23:48.806123  669337 config.go:182] Loaded profile config "scheduled-stop-875950": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1207 23:23:48.806188  669337 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/config.json ...
	I1207 23:23:48.806365  669337 mustload.go:66] Loading cluster: scheduled-stop-875950
	I1207 23:23:48.806459  669337 config.go:182] Loaded profile config "scheduled-stop-875950": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-875950 -n scheduled-stop-875950
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-875950 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1207 23:23:49.196557  669485 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:23:49.196679  669485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:23:49.196688  669485 out.go:374] Setting ErrFile to fd 2...
	I1207 23:23:49.196692  669485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:23:49.196897  669485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 23:23:49.197143  669485 out.go:368] Setting JSON to false
	I1207 23:23:49.197339  669485 daemonize_unix.go:73] killing process 669370 as it is an old scheduled stop
	I1207 23:23:49.197445  669485 mustload.go:66] Loading cluster: scheduled-stop-875950
	I1207 23:23:49.197870  669485 config.go:182] Loaded profile config "scheduled-stop-875950": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1207 23:23:49.197965  669485 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/config.json ...
	I1207 23:23:49.198188  669485 mustload.go:66] Loading cluster: scheduled-stop-875950
	I1207 23:23:49.198314  669485 config.go:182] Loaded profile config "scheduled-stop-875950": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1207 23:23:49.203954  397166 retry.go:31] will retry after 74.363µs: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.205099  397166 retry.go:31] will retry after 111.345µs: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.206252  397166 retry.go:31] will retry after 332.581µs: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.207388  397166 retry.go:31] will retry after 211.908µs: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.208563  397166 retry.go:31] will retry after 547.619µs: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.209717  397166 retry.go:31] will retry after 947.98µs: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.210864  397166 retry.go:31] will retry after 787.231µs: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.211992  397166 retry.go:31] will retry after 1.356142ms: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.214200  397166 retry.go:31] will retry after 2.090832ms: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.217396  397166 retry.go:31] will retry after 3.88617ms: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.221618  397166 retry.go:31] will retry after 4.097772ms: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.226844  397166 retry.go:31] will retry after 9.633522ms: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.237083  397166 retry.go:31] will retry after 18.620875ms: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.256318  397166 retry.go:31] will retry after 23.013092ms: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.279651  397166 retry.go:31] will retry after 21.539346ms: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
I1207 23:23:49.301908  397166 retry.go:31] will retry after 41.663737ms: open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/pid: no such file or directory
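The retry.go lines above show the scheduled-stop pid file being polled with short, growing delays until it appears. A hedged Go sketch of that retry-with-backoff pattern follows; it illustrates the idea only and is not minikube's retry package.

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls os.Stat with a roughly growing, jittered delay until the
// file exists or the deadline passes (hypothetical helper, mirroring the
// "will retry after ..." log lines above).
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 100 * time.Microsecond
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else {
			fmt.Printf("will retry after %v: %v\n", delay, err)
		}
		time.Sleep(delay)
		// Grow the delay with some jitter, capping it so retries stay frequent.
		delay = time.Duration(float64(delay) * (1.5 + rand.Float64()))
		if delay > 500*time.Millisecond {
			delay = 500 * time.Millisecond
		}
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	_ = waitForFile("/tmp/example-pid", 2*time.Second)
}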
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-875950 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-875950 -n scheduled-stop-875950
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-875950
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-875950 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1207 23:24:15.115689  670417 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:24:15.115828  670417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:24:15.115838  670417 out.go:374] Setting ErrFile to fd 2...
	I1207 23:24:15.115842  670417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:24:15.116052  670417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-393577/.minikube/bin
	I1207 23:24:15.116336  670417 out.go:368] Setting JSON to false
	I1207 23:24:15.116416  670417 mustload.go:66] Loading cluster: scheduled-stop-875950
	I1207 23:24:15.116770  670417 config.go:182] Loaded profile config "scheduled-stop-875950": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1207 23:24:15.116844  670417 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/scheduled-stop-875950/config.json ...
	I1207 23:24:15.117042  670417 mustload.go:66] Loading cluster: scheduled-stop-875950
	I1207 23:24:15.117155  670417 config.go:182] Loaded profile config "scheduled-stop-875950": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-875950
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-875950: exit status 7 (84.241047ms)

                                                
                                                
-- stdout --
	scheduled-stop-875950
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-875950 -n scheduled-stop-875950
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-875950 -n scheduled-stop-875950: exit status 7 (81.017017ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-875950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-875950
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-875950: (1.648565159s)
--- PASS: TestScheduledStopUnix (98.79s)

                                                
                                    
TestSkaffold (84.06s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe72310230 version
skaffold_test.go:63: skaffold version: v2.17.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-754134 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-754134 --memory=3072 --driver=docker  --container-runtime=docker: (24.221584756s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe72310230 run --minikube-profile skaffold-754134 --kube-context skaffold-754134 --status-check=true --port-forward=false --interactive=false
E1207 23:25:51.612292  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe72310230 run --minikube-profile skaffold-754134 --kube-context skaffold-754134 --status-check=true --port-forward=false --interactive=false: (41.979417351s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-6654545864-r44z9" [8b363abc-5358-43e0-8a20-5684267e8bc7] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004068048s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-7b7566484f-ml2gd" [e0501dc4-1afe-453e-82dd-3c133f3aeb7e] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003947756s
helpers_test.go:175: Cleaning up "skaffold-754134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-754134
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-754134: (3.2156484s)
--- PASS: TestSkaffold (84.06s)

                                                
                                    
TestInsufficientStorage (12.26s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-557646 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-557646 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.955288369s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7e190872-fdb8-43c8-9d23-e6c3dbf67d80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-557646] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e0c08dc-951f-46af-882c-b0a9d72ef072","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22054"}}
	{"specversion":"1.0","id":"ffa15bc0-997e-4acc-be7f-b1fdf7bf1c43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0e9ab7fc-a08e-46e0-80d8-e270cb1d3337","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig"}}
	{"specversion":"1.0","id":"3b497f3c-f488-49e4-9bdb-c940823315ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube"}}
	{"specversion":"1.0","id":"d838dead-3a0d-489a-99ae-fd34b51aca07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d61b10af-bc20-4e42-8a81-e544ed02febe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"05ca0ea6-edab-4928-a561-beadfd8048b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4bda742e-3849-413c-a541-eae2b2f4dba8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c7c1fb5d-1aea-4b0e-aab5-39f5b2c4c28b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8148685a-94f2-4c63-bb17-ad7441d25a0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2ceecd34-6a31-414b-b6b6-c6bba0025d03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-557646\" primary control-plane node in \"insufficient-storage-557646\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"658ae2e4-e36d-4b8e-b7a5-71794cfe7e24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764843390-22032 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d4394d47-cb5f-423a-b5d2-6c389210420e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c2190ff3-e25a-47a5-aba8-277248fb6115","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-557646 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-557646 --output=json --layout=cluster: exit status 7 (295.790027ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-557646","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-557646","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 23:26:36.234839  682420 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-557646" does not appear in /home/jenkins/minikube-integration/22054-393577/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-557646 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-557646 --output=json --layout=cluster: exit status 7 (286.43937ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-557646","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-557646","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 23:26:36.521861  682537 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-557646" does not appear in /home/jenkins/minikube-integration/22054-393577/kubeconfig
	E1207 23:26:36.532209  682537 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/insufficient-storage-557646/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-557646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-557646
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-557646: (1.725772942s)
--- PASS: TestInsufficientStorage (12.26s)

                                                
                                    
TestRunningBinaryUpgrade (345.75s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1814024274 start -p running-upgrade-228932 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1814024274 start -p running-upgrade-228932 --memory=3072 --vm-driver=docker  --container-runtime=docker: (53.606469933s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-228932 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-228932 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m46.682838953s)
helpers_test.go:175: Cleaning up "running-upgrade-228932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-228932
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-228932: (2.197473768s)
--- PASS: TestRunningBinaryUpgrade (345.75s)

                                                
                                    
TestKubernetesUpgrade (322.11s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-142155 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-142155 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.347829952s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-142155
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-142155: (1.949283437s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-142155 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-142155 status --format={{.Host}}: exit status 7 (89.408233ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-142155 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-142155 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m24.970813719s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-142155 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-142155 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-142155 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (93.796802ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-142155] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-142155
	    minikube start -p kubernetes-upgrade-142155 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1421552 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-142155 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-142155 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1207 23:33:55.616645  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-142155 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.018340491s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-142155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-142155
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-142155: (2.573164716s)
--- PASS: TestKubernetesUpgrade (322.11s)

                                                
                                    
TestMissingContainerUpgrade (114.34s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3917357325 start -p missing-upgrade-215648 --memory=3072 --driver=docker  --container-runtime=docker
E1207 23:26:47.543535  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:27:04.468741  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3917357325 start -p missing-upgrade-215648 --memory=3072 --driver=docker  --container-runtime=docker: (53.068361922s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-215648
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-215648: (10.415210902s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-215648
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-215648 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-215648 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.862738505s)
helpers_test.go:175: Cleaning up "missing-upgrade-215648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-215648
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-215648: (2.356827527s)
--- PASS: TestMissingContainerUpgrade (114.34s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-529070 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-529070 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (81.526047ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-529070] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-393577/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-393577/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (41.64s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-529070 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-529070 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.289759301s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-529070 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.64s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.68s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-529070 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-529070 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (14.453288893s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-529070 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-529070 status -o json: exit status 2 (351.149607ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-529070","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-529070
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-529070: (1.877359477s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.68s)

                                                
                                    
TestNoKubernetes/serial/Start (8.81s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-529070 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-529070 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (8.806734506s)
--- PASS: TestNoKubernetes/serial/Start (8.81s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22054-393577/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-529070 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-529070 "sudo systemctl is-active --quiet service kubelet": exit status 1 (296.963403ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (6.72s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
E1207 23:27:49.705749  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (5.997861722s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (6.72s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-529070
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-529070: (1.322247266s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.59s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-529070 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-529070 --driver=docker  --container-runtime=docker: (8.586952895s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.59s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-529070 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-529070 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.605168ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.42s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.42s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (297.5s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1106359090 start -p stopped-upgrade-549010 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1106359090 start -p stopped-upgrade-549010 --memory=3072 --vm-driver=docker  --container-runtime=docker: (25.159355002s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1106359090 -p stopped-upgrade-549010 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1106359090 -p stopped-upgrade-549010 stop: (10.790275717s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-549010 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1207 23:30:51.612926  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:31:11.756169  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:31:11.762645  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:31:11.774106  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:31:11.795567  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:31:11.837120  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:31:11.918584  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:31:12.080166  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:31:12.402015  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:31:13.043477  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:31:14.325815  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:31:16.887671  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:31:22.009172  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:31:32.251196  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:31:52.733426  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-549010 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m21.553839576s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (297.50s)

                                                
                                    
TestPause/serial/Start (70.57s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-037325 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-037325 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m10.565124952s)
--- PASS: TestPause/serial/Start (70.57s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (66.23s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m6.231556529s)
--- PASS: TestNetworkPlugins/group/auto/Start (66.23s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (43.01s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-037325 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-037325 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.98669275s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (43.01s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-549010
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-549010: (1.127428148s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-541552 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (54.76s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
I1207 23:34:08.092343  397166 config.go:182] Loaded profile config "auto-541552": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (54.762715177s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (54.76s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-541552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4xvhr" [658de759-815a-4c34-a47a-cbefe363e484] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4xvhr" [658de759-815a-4c34-a47a-cbefe363e484] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.00408202s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (62.17s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m2.168194182s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.17s)

                                                
                                    
TestPause/serial/Pause (0.55s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-037325 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.55s)

                                                
                                    
TestPause/serial/VerifyStatus (0.36s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-037325 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-037325 --output=json --layout=cluster: exit status 2 (361.768181ms)

                                                
                                                
-- stdout --
	{"Name":"pause-037325","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-037325","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)

                                                
                                    
TestPause/serial/Unpause (0.58s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-037325 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.58s)

                                                
                                    
TestPause/serial/PauseAgain (0.64s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-037325 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.64s)

                                                
                                    
TestPause/serial/DeletePaused (2.37s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-037325 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-037325 --alsologtostderr -v=5: (2.371515714s)
--- PASS: TestPause/serial/DeletePaused (2.37s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-541552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.81s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-037325
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-037325: exit status 1 (20.816842ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-037325: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.81s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (47.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (47.126191574s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (47.13s)

                                                
                                    
TestNetworkPlugins/group/false/Start (64.07s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m4.066093755s)
--- PASS: TestNetworkPlugins/group/false/Start (64.07s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-pnpvp" [e82daf38-8b33-452b-ab6e-8370ad97e600] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003777978s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-541552 "pgrep -a kubelet"
I1207 23:35:08.986939  397166 config.go:182] Loaded profile config "kindnet-541552": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-541552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gmqgj" [6f486a98-11a8-4094-a3b3-48f57fd69594] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gmqgj" [6f486a98-11a8-4094-a3b3-48f57fd69594] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003475499s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-541552 "pgrep -a kubelet"
I1207 23:35:10.582507  397166 config.go:182] Loaded profile config "custom-flannel-541552": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-541552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8vnqs" [06b7d973-7b14-47b1-8b94-e78408e0c8f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8vnqs" [06b7d973-7b14-47b1-8b94-e78408e0c8f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004582563s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-xwknp" [89042e47-8e65-48c9-b77c-128984621487] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003955151s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-541552 "pgrep -a kubelet"
I1207 23:35:17.973878  397166 config.go:182] Loaded profile config "calico-541552": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-541552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f2cxv" [be19deba-1b78-47ea-b4d4-83b1ab02871d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f2cxv" [be19deba-1b78-47ea-b4d4-83b1ab02871d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004261791s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-541552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-541552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-541552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)
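The DNS, Localhost, and HairPin subtests above (run identically for kindnet, custom-flannel, and calico) are three probes executed from inside the netcat pod. Collected in one place for the calico profile, exactly as they appear in the log:

# cluster DNS: the pod must resolve the kubernetes.default service name
kubectl --context calico-541552 exec deployment/netcat -- nslookup kubernetes.default
# loopback: the pod must reach its own listener on localhost:8080
kubectl --context calico-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# hairpin: the pod must reach itself back through the netcat service name
kubectl --context calico-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"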

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (68.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m8.010829551s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (42.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (42.500420672s)
--- PASS: TestNetworkPlugins/group/flannel/Start (42.50s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-541552 "pgrep -a kubelet"
I1207 23:35:48.381015  397166 config.go:182] Loaded profile config "false-541552": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-541552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p788p" [0b147785-f509-46e4-9581-ed54de1a9a56] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p788p" [0b147785-f509-46e4-9581-ed54de1a9a56] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003906398s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (68.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E1207 23:35:51.612227  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m8.458648906s)
--- PASS: TestNetworkPlugins/group/bridge/Start (68.46s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-541552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (67.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m7.236865107s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (67.24s)
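The four Start runs in this group differ only in how the pod network is selected; memory, driver, runtime, and wait settings are identical. Side by side, as recorded above (comments are a reading aid, not part of the commands):

# default (bridge-style) CNI enabled explicitly
out/minikube-linux-amd64 start -p enable-default-cni-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker --container-runtime=docker
# flannel CNI
out/minikube-linux-amd64 start -p flannel-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker --container-runtime=docker
# bridge CNI
out/minikube-linux-amd64 start -p bridge-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker --container-runtime=docker
# kubenet network plugin instead of a CNI
out/minikube-linux-amd64 start -p kubenet-541552 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker --container-runtime=docker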

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-pkh7k" [2c8271af-0187-4569-a7e7-4d02141cb00b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003990313s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-541552 "pgrep -a kubelet"
I1207 23:36:30.922283  397166 config.go:182] Loaded profile config "flannel-541552": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-541552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5lvpk" [a3075eab-3d27-4f74-8dab-982a81fec086] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5lvpk" [a3075eab-3d27-4f74-8dab-982a81fec086] Running
E1207 23:36:39.458919  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/skaffold-754134/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.0037771s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-541552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-541552 "pgrep -a kubelet"
I1207 23:36:49.182316  397166 config.go:182] Loaded profile config "enable-default-cni-541552": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-541552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f7drb" [38255dc2-2539-4a03-8c3a-9a951f07768e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f7drb" [38255dc2-2539-4a03-8c3a-9a951f07768e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004957743s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-541552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-541552 "pgrep -a kubelet"
I1207 23:36:59.888740  397166 config.go:182] Loaded profile config "bridge-541552": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-541552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k7f9v" [cde3bf62-74bd-4223-aa62-09175a0fbca7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-k7f9v" [cde3bf62-74bd-4223-aa62-09175a0fbca7] Running
E1207 23:37:04.468336  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/addons-549698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.102886561s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (43.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-147700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-147700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (43.031943225s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (43.03s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-541552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (42.74s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-772229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-772229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (42.740864428s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (42.74s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-541552 "pgrep -a kubelet"
I1207 23:37:29.595285  397166 config.go:182] Loaded profile config "kubenet-541552": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-541552 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-47cgf" [86e15ee0-2fa2-4314-a70b-f8faf014fdb7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-47cgf" [86e15ee0-2fa2-4314-a70b-f8faf014fdb7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.004482676s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (69.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-975929 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2
E1207 23:37:32.773645  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-975929 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2: (1m9.608058781s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.61s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-541552 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-541552 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-147700 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [833739fe-21dc-4d78-b779-2f396cbc740a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [833739fe-21dc-4d78-b779-2f396cbc740a] Running
E1207 23:37:49.705789  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-304107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004174737s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-147700 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-147700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-147700 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-147700 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-147700 --alsologtostderr -v=3: (11.84997809s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-772229 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [82a8a4d0-f517-4146-a309-c0ec87df732b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [82a8a4d0-f517-4146-a309-c0ec87df732b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00456197s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-772229 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-338372 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-338372 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2: (1m12.10547122s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-147700 -n old-k8s-version-147700
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-147700 -n old-k8s-version-147700: exit status 7 (100.99364ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-147700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
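A note on the "exit status 7 (may be ok)" lines above and in the later EnableAddonAfterStop runs: minikube status exits non-zero whenever the profile is not fully running, and 7 is what it returns here for a profile that has just been stopped, so the test accepts it and goes on to enable the dashboard addon while the cluster is down. The same behaviour can be reproduced by hand (same profile and flags as the log; echo $? only surfaces the exit code):

out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-147700 -n old-k8s-version-147700
echo $?   # prints 7 while the profile is stopped; the test treats this as acceptable
out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-147700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4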

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (47.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-147700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-147700 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (47.072890586s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-147700 -n old-k8s-version-147700
E1207 23:38:54.684424  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/functional-442811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-772229 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-772229 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-772229 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-772229 --alsologtostderr -v=3: (11.115665599s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-772229 -n no-preload-772229
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-772229 -n no-preload-772229: exit status 7 (113.87028ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-772229 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (49.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-772229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-772229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (49.192847283s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-772229 -n no-preload-772229
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-975929 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f828aa6a-73d5-4b01-b939-eadc68957164] Pending
helpers_test.go:352: "busybox" [f828aa6a-73d5-4b01-b939-eadc68957164] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f828aa6a-73d5-4b01-b939-eadc68957164] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003731863s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-975929 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-975929 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-975929 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.82s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-975929 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-975929 --alsologtostderr -v=3: (11.050755906s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gb8kg" [41ac3f1b-62b2-41d7-acd1-3244f7785bf3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00378861s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gb8kg" [41ac3f1b-62b2-41d7-acd1-3244f7785bf3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003612103s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-147700 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-975929 -n embed-certs-975929
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-975929 -n embed-certs-975929: exit status 7 (89.799403ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-975929 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-975929 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-975929 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2: (50.911046397s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-975929 -n embed-certs-975929
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-147700 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-147700 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-147700 -n old-k8s-version-147700
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-147700 -n old-k8s-version-147700: exit status 2 (346.537553ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-147700 -n old-k8s-version-147700
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-147700 -n old-k8s-version-147700: exit status 2 (336.511575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-147700 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-147700 -n old-k8s-version-147700
E1207 23:39:08.293053  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/auto-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:08.299460  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/auto-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:08.310826  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/auto-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:08.333133  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/auto-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:08.375301  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/auto-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:08.457433  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/auto-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-147700 -n old-k8s-version-147700
E1207 23:39:08.619420  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/auto-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.64s)
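The Pause subtest is checking a split state rather than a failure: after minikube pause the apiserver status reads Paused and the kubelet status reads Stopped, each reported with exit status 2 (which the test tolerates), and unpause is expected to bring the profile back. Condensed from the log for the same profile:

out/minikube-linux-amd64 pause -p old-k8s-version-147700 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-147700 -n old-k8s-version-147700   # "Paused", exit status 2
out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-147700 -n old-k8s-version-147700     # "Stopped", exit status 2
out/minikube-linux-amd64 unpause -p old-k8s-version-147700 --alsologtostderr -v=1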

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (26.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-105400 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
E1207 23:39:13.427480  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/auto-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-105400 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (26.829739516s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.83s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-fwk5w" [bca9f343-52c1-4228-b646-0854cb40b764] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.013155961s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-338372 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [283aa357-8878-40e2-bc07-0cf4b7076d58] Pending
helpers_test.go:352: "busybox" [283aa357-8878-40e2-bc07-0cf4b7076d58] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1207 23:39:18.549080  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/auto-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [283aa357-8878-40e2-bc07-0cf4b7076d58] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004153329s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-338372 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-fwk5w" [bca9f343-52c1-4228-b646-0854cb40b764] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003695671s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-772229 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-338372 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-338372 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-772229 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-338372 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-338372 --alsologtostderr -v=3: (11.15460371s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-772229 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-772229 -n no-preload-772229
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-772229 -n no-preload-772229: exit status 2 (391.421074ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-772229 -n no-preload-772229
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-772229 -n no-preload-772229: exit status 2 (365.035119ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-772229 --alsologtostderr -v=1
E1207 23:39:28.791072  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/auto-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-772229 -n no-preload-772229
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-772229 -n no-preload-772229
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-338372 -n default-k8s-diff-port-338372
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-338372 -n default-k8s-diff-port-338372: exit status 7 (116.403534ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-338372 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-338372 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-338372 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2: (50.52503089s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-338372 -n default-k8s-diff-port-338372
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-105400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-105400 --alsologtostderr -v=3
E1207 23:39:49.272987  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/auto-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-105400 --alsologtostderr -v=3: (10.994369295s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.99s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-105400 -n newest-cni-105400
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-105400 -n newest-cni-105400: exit status 7 (85.748929ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-105400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-105400 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-105400 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (12.912729597s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-105400 -n newest-cni-105400
E1207 23:40:03.942419  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/kindnet-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6gkc5" [c1c59a24-c68d-4405-abcc-ac9e412b00be] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003648717s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6gkc5" [c1c59a24-c68d-4405-abcc-ac9e412b00be] Running
E1207 23:40:02.654007  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/kindnet-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:40:02.660979  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/kindnet-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:40:02.672388  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/kindnet-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:40:02.693777  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/kindnet-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:40:02.735203  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/kindnet-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:40:02.816803  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/kindnet-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:40:02.978349  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/kindnet-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:40:03.300716  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/kindnet-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004303492s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-975929 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-105400 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-105400 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-105400 -n newest-cni-105400
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-105400 -n newest-cni-105400: exit status 2 (375.393666ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-105400 -n newest-cni-105400
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-105400 -n newest-cni-105400: exit status 2 (355.164938ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-105400 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-105400 -n newest-cni-105400
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-105400 -n newest-cni-105400
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.69s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-975929 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-975929 --alsologtostderr -v=1
E1207 23:40:05.225202  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/kindnet-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-975929 -n embed-certs-975929
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-975929 -n embed-certs-975929: exit status 2 (347.050812ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-975929 -n embed-certs-975929
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-975929 -n embed-certs-975929: exit status 2 (402.251741ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-975929 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-975929 -n embed-certs-975929
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-975929 -n embed-certs-975929
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.84s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f26gc" [9f747545-8f72-47ab-9ae7-1ea9b35e6008] Running
E1207 23:40:30.235112  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/auto-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:40:31.293842  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/custom-flannel-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:40:32.170377  397166 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/calico-541552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003905841s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f26gc" [9f747545-8f72-47ab-9ae7-1ea9b35e6008] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003662486s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-338372 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-338372 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-338372 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-338372 -n default-k8s-diff-port-338372
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-338372 -n default-k8s-diff-port-338372: exit status 2 (319.763289ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-338372 -n default-k8s-diff-port-338372
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-338372 -n default-k8s-diff-port-338372: exit status 2 (313.61379ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-338372 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-338372 -n default-k8s-diff-port-338372
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-338372 -n default-k8s-diff-port-338372
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.41s)

                                                
                                    

Test skip (29/434)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
117 TestFunctional/parallel/PodmanEnv 0
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
151 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
263 TestGvisorAddon 0
292 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
293 TestISOImage 0
357 TestChangeNoneUser 0
360 TestScheduledStopWindows 0
390 TestNetworkPlugins/group/cilium 4.33
398 TestStartStop/group/disable-driver-mounts 0.18
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-541552 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-541552" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-393577/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:27:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-215648
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-393577/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:28:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-228932
contexts:
- context:
    cluster: missing-upgrade-215648
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:27:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: missing-upgrade-215648
  name: missing-upgrade-215648
- context:
    cluster: running-upgrade-228932
    user: running-upgrade-228932
  name: running-upgrade-228932
current-context: running-upgrade-228932
kind: Config
users:
- name: missing-upgrade-215648
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/missing-upgrade-215648/client.crt
    client-key: /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/missing-upgrade-215648/client.key
- name: running-upgrade-228932
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/running-upgrade-228932/client.crt
    client-key: /home/jenkins/minikube-integration/22054-393577/.minikube/profiles/running-upgrade-228932/client.key
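
The kubeconfig dumped above contains only the missing-upgrade-215648 and running-upgrade-228932 contexts, which is why every command in this debug dump that targets the cilium-541552 context reports it as missing: the cilium-541552 profile was never started, so no such context was ever written. A quick way to confirm this against the same kubeconfig (a sketch; it assumes kubectl is on the PATH and reads the file shown above):

    kubectl config get-contexts                 # lists only the two *-upgrade-* contexts
    kubectl config get-contexts cilium-541552   # expected to fail, since that context was never created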

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-541552

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-541552" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541552"

                                                
                                                
----------------------- debugLogs end: cilium-541552 [took: 4.135398739s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-541552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-541552
--- SKIP: TestNetworkPlugins/group/cilium (4.33s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-204822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-204822
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    