Test Report: Docker_Linux_docker_arm64 21508

8932374f20a738e68cf28dc9e127463468f1eb30:2025-09-08:41334

Failed tests (3/347)

Order  Failed test                                    Duration (s)
91     TestFunctional/parallel/DashboardCmd           302.27
98     TestFunctional/parallel/ServiceCmdConnect      603.78
100    TestFunctional/parallel/PersistentVolumeClaim  249.19
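
To triage one of these locally, the failing test can be re-run on its own from a minikube source checkout. A minimal sketch, assuming the standard layout (the -run pattern is the test name from the table above; the make step and any extra harness flags the suite may need are assumptions):

	# build the minikube binary (the log above uses out/minikube-linux-arm64), then run only the failing test
	make
	go test ./test/integration -run "TestFunctional/parallel/DashboardCmd" -v -timeout 30m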
TestFunctional/parallel/DashboardCmd (302.27s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-140475 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-140475 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-140475 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-140475 --alsologtostderr -v=1] stderr:
I0908 12:35:57.925537  318744 out.go:360] Setting OutFile to fd 1 ...
I0908 12:35:57.927656  318744 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:35:57.927677  318744 out.go:374] Setting ErrFile to fd 2...
I0908 12:35:57.927685  318744 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:35:57.927948  318744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
I0908 12:35:57.928263  318744 mustload.go:65] Loading cluster: functional-140475
I0908 12:35:57.928726  318744 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:35:57.929175  318744 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
I0908 12:35:57.955972  318744 host.go:66] Checking if "functional-140475" exists ...
I0908 12:35:57.956332  318744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0908 12:35:58.047560  318744 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 12:35:58.033782533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0908 12:35:58.047698  318744 api_server.go:166] Checking apiserver status ...
I0908 12:35:58.047784  318744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0908 12:35:58.047850  318744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
I0908 12:35:58.074638  318744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
I0908 12:35:58.168040  318744 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9319/cgroup
I0908 12:35:58.177860  318744 api_server.go:182] apiserver freezer: "12:freezer:/docker/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/kubepods/burstable/pod8ea3c91dd17a62bb92f198d336979d84/264ed758e3516ada447c6424a841ddcd7554b019586869f939d8214171d797e9"
I0908 12:35:58.177972  318744 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/kubepods/burstable/pod8ea3c91dd17a62bb92f198d336979d84/264ed758e3516ada447c6424a841ddcd7554b019586869f939d8214171d797e9/freezer.state
I0908 12:35:58.195151  318744 api_server.go:204] freezer state: "THAWED"
I0908 12:35:58.195191  318744 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0908 12:35:58.208897  318744 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0908 12:35:58.208934  318744 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0908 12:35:58.209125  318744 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:35:58.209149  318744 addons.go:69] Setting dashboard=true in profile "functional-140475"
I0908 12:35:58.209166  318744 addons.go:238] Setting addon dashboard=true in "functional-140475"
I0908 12:35:58.209202  318744 host.go:66] Checking if "functional-140475" exists ...
I0908 12:35:58.209605  318744 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
I0908 12:35:58.320481  318744 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0908 12:35:58.323434  318744 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0908 12:35:58.326248  318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0908 12:35:58.326279  318744 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0908 12:35:58.326346  318744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
I0908 12:35:58.401589  318744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
I0908 12:35:58.524015  318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0908 12:35:58.524036  318744 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0908 12:35:58.549001  318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0908 12:35:58.549058  318744 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0908 12:35:58.588850  318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0908 12:35:58.588871  318744 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0908 12:35:58.615997  318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0908 12:35:58.616018  318744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0908 12:35:58.642583  318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0908 12:35:58.642604  318744 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0908 12:35:58.679927  318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0908 12:35:58.679950  318744 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0908 12:35:58.713796  318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0908 12:35:58.713822  318744 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0908 12:35:58.748263  318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0908 12:35:58.748294  318744 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0908 12:35:58.800232  318744 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0908 12:35:58.800253  318744 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0908 12:35:58.839475  318744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0908 12:35:59.918776  318744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.079253649s)
I0908 12:35:59.921969  318744 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-140475 addons enable metrics-server

I0908 12:35:59.924793  318744 addons.go:201] Writing out "functional-140475" config to set dashboard=true...
W0908 12:35:59.925092  318744 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0908 12:35:59.925823  318744 kapi.go:59] client config for functional-140475: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt", KeyFile:"/home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.key", CAFile:"/home/jenkins/minikube-integration/21508-272936/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f2d7d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0908 12:35:59.926525  318744 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0908 12:35:59.926543  318744 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0908 12:35:59.926549  318744 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0908 12:35:59.926560  318744 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0908 12:35:59.926567  318744 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0908 12:35:59.947647  318744 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  24b19c89-8a86-490e-8313-c0d41198ef4f 1566 0 2025-09-08 12:35:59 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-08 12:35:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.96.157.142,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.96.157.142],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0908 12:35:59.947835  318744 out.go:285] * Launching proxy ...
* Launching proxy ...
I0908 12:35:59.947969  318744 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-140475 proxy --port 36195]
I0908 12:35:59.948316  318744 dashboard.go:157] Waiting for kubectl to output host:port ...
I0908 12:36:00.083596  318744 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0908 12:36:00.083665  318744 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0908 12:36:00.135513  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3fc450f9-e82d-441f-ae1d-7227cd961e9f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eeac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e0640 TLS:<nil>}
I0908 12:36:00.135604  318744 retry.go:31] will retry after 62.261µs: Temporary Error: unexpected response code: 503
I0908 12:36:00.142419  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6107dc49-b4d0-4f37-8399-d7cafeaadede] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eeb40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e0780 TLS:<nil>}
I0908 12:36:00.142492  318744 retry.go:31] will retry after 82.417µs: Temporary Error: unexpected response code: 503
I0908 12:36:00.158868  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5b5540e4-4d96-4b2d-b230-34f381ae04c7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eec00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e08c0 TLS:<nil>}
I0908 12:36:00.158982  318744 retry.go:31] will retry after 324.394µs: Temporary Error: unexpected response code: 503
I0908 12:36:00.209812  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[24d9d229-0aa1-47b4-bbc6-458598222638] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eecc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e0a00 TLS:<nil>}
I0908 12:36:00.209883  318744 retry.go:31] will retry after 465.312µs: Temporary Error: unexpected response code: 503
I0908 12:36:00.229080  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0ebe42aa-14fe-4c22-90b6-657a0a227886] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eed40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e0b40 TLS:<nil>}
I0908 12:36:00.229155  318744 retry.go:31] will retry after 617.56µs: Temporary Error: unexpected response code: 503
I0908 12:36:00.247278  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[07406cc8-84a0-46d4-89c7-5ffb6f1232bc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eedc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e0c80 TLS:<nil>}
I0908 12:36:00.247346  318744 retry.go:31] will retry after 897.768µs: Temporary Error: unexpected response code: 503
I0908 12:36:00.289073  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9659e23d-f059-4701-8d66-e612e51c73bb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eee40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e0dc0 TLS:<nil>}
I0908 12:36:00.289139  318744 retry.go:31] will retry after 679.578µs: Temporary Error: unexpected response code: 503
I0908 12:36:00.308538  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2780ea42-d2bb-46e1-9b21-2b0888693e4d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eef00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e0f00 TLS:<nil>}
I0908 12:36:00.308613  318744 retry.go:31] will retry after 1.943586ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.315171  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[39f1170a-2974-4dfc-af52-1a63bdb360de] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004eef80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1040 TLS:<nil>}
I0908 12:36:00.315235  318744 retry.go:31] will retry after 2.122617ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.347562  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f8e4c83-09b7-45df-8c2f-867654a04f2a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1180 TLS:<nil>}
I0908 12:36:00.347636  318744 retry.go:31] will retry after 2.093445ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.365962  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3fb8539c-78cb-4272-be38-794bbd292fa9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e12c0 TLS:<nil>}
I0908 12:36:00.366036  318744 retry.go:31] will retry after 3.43882ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.373517  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[17acf533-545c-4783-a92b-5739288be714] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1680 TLS:<nil>}
I0908 12:36:00.373588  318744 retry.go:31] will retry after 12.473955ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.390341  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3c17b1d9-1243-49b3-832f-36f8dbfc23d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e17c0 TLS:<nil>}
I0908 12:36:00.390415  318744 retry.go:31] will retry after 17.830611ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.412393  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f6923f3b-2ab0-407d-a18b-509769027255] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1900 TLS:<nil>}
I0908 12:36:00.412462  318744 retry.go:31] will retry after 23.140427ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.440422  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3061307f-2590-496f-a036-dc98e5b7a21a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1a40 TLS:<nil>}
I0908 12:36:00.440488  318744 retry.go:31] will retry after 37.559769ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.482401  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[28a42824-f775-438e-996c-aae2f971016a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1b80 TLS:<nil>}
I0908 12:36:00.482471  318744 retry.go:31] will retry after 58.118998ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.550084  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ebc372cb-1d74-4eca-af2a-b14c0e69abb7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1cc0 TLS:<nil>}
I0908 12:36:00.550155  318744 retry.go:31] will retry after 63.121924ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.618410  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e70b5e35-907a-4c95-bfdf-dc2d01c11dec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001e1e00 TLS:<nil>}
I0908 12:36:00.618474  318744 retry.go:31] will retry after 98.600478ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.720824  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[38fbdbe4-d337-4f56-8444-0e7f83551455] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0000 TLS:<nil>}
I0908 12:36:00.720886  318744 retry.go:31] will retry after 206.504398ms: Temporary Error: unexpected response code: 503
I0908 12:36:00.931236  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8dc4c038-7d61-4c2d-ab22-24531dcf79e8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:00 GMT]] Body:0x40004ef800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0140 TLS:<nil>}
I0908 12:36:00.931303  318744 retry.go:31] will retry after 278.348629ms: Temporary Error: unexpected response code: 503
I0908 12:36:01.213928  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1cb0c6f8-62a7-4059-a86b-a342ba92adc1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:01 GMT]] Body:0x40004ef880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0280 TLS:<nil>}
I0908 12:36:01.213995  318744 retry.go:31] will retry after 263.268583ms: Temporary Error: unexpected response code: 503
I0908 12:36:01.481527  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[20b77f5f-4d03-43e1-8c42-b82545ca3432] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:01 GMT]] Body:0x40004ef980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f03c0 TLS:<nil>}
I0908 12:36:01.481600  318744 retry.go:31] will retry after 347.367696ms: Temporary Error: unexpected response code: 503
I0908 12:36:01.833344  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f46e8aa8-0516-4a0e-b2ef-cdabd53318f4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:01 GMT]] Body:0x40004efa00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0500 TLS:<nil>}
I0908 12:36:01.833408  318744 retry.go:31] will retry after 1.097415118s: Temporary Error: unexpected response code: 503
I0908 12:36:02.934645  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4309747f-9b19-4aaf-ac2b-496a35c9b8f2] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:02 GMT]] Body:0x400089dd80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004383c0 TLS:<nil>}
I0908 12:36:02.934704  318744 retry.go:31] will retry after 1.503159086s: Temporary Error: unexpected response code: 503
I0908 12:36:04.441454  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c0053e3-7ec3-4bd5-989b-904c7a5fd667] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:04 GMT]] Body:0x400089de00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000438640 TLS:<nil>}
I0908 12:36:04.441525  318744 retry.go:31] will retry after 1.756107322s: Temporary Error: unexpected response code: 503
I0908 12:36:06.200727  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[93be152c-c300-4d87-8fb2-fff201eae5c8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:06 GMT]] Body:0x400089de80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000438780 TLS:<nil>}
I0908 12:36:06.200791  318744 retry.go:31] will retry after 1.593747047s: Temporary Error: unexpected response code: 503
I0908 12:36:07.798560  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8dd41046-4874-4477-aeab-05d955584918] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:07 GMT]] Body:0x400089df00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004388c0 TLS:<nil>}
I0908 12:36:07.798618  318744 retry.go:31] will retry after 5.025678361s: Temporary Error: unexpected response code: 503
I0908 12:36:12.827481  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[36c86fdf-d5fc-4d58-a94a-4f74ed2609aa] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:12 GMT]] Body:0x40008d20c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0640 TLS:<nil>}
I0908 12:36:12.827545  318744 retry.go:31] will retry after 5.010852785s: Temporary Error: unexpected response code: 503
I0908 12:36:17.842378  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aad483cc-bf23-4f4e-a345-92fc5f740321] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:17 GMT]] Body:0x40008f1300 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0780 TLS:<nil>}
I0908 12:36:17.842490  318744 retry.go:31] will retry after 7.35670499s: Temporary Error: unexpected response code: 503
I0908 12:36:25.202312  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5fac5fec-b417-4ebd-a901-688a86140019] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:25 GMT]] Body:0x40008f1680 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f08c0 TLS:<nil>}
I0908 12:36:25.202391  318744 retry.go:31] will retry after 14.180156748s: Temporary Error: unexpected response code: 503
I0908 12:36:39.385756  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[851bd054-4334-4bcf-b06c-3197968c0ac1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:36:39 GMT]] Body:0x40008d2200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000438a00 TLS:<nil>}
I0908 12:36:39.385821  318744 retry.go:31] will retry after 23.748774861s: Temporary Error: unexpected response code: 503
I0908 12:37:03.138375  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[674c074c-b812-47f2-a71d-4fd09cc09b47] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:37:03 GMT]] Body:0x40008d22c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000438b40 TLS:<nil>}
I0908 12:37:03.138438  318744 retry.go:31] will retry after 19.755358128s: Temporary Error: unexpected response code: 503
I0908 12:37:22.898985  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c351c121-7a93-4f86-aec3-d798e387a196] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:37:22 GMT]] Body:0x40008f1800 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0a00 TLS:<nil>}
I0908 12:37:22.899048  318744 retry.go:31] will retry after 56.605169321s: Temporary Error: unexpected response code: 503
I0908 12:38:19.508740  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[21fd5687-f757-4e7a-bea7-074477821648] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:38:19 GMT]] Body:0x40008d2080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0b40 TLS:<nil>}
I0908 12:38:19.508805  318744 retry.go:31] will retry after 45.817120443s: Temporary Error: unexpected response code: 503
I0908 12:39:05.330911  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[88fb75ee-b83c-49e9-95e4-0e9d96a92932] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:39:05 GMT]] Body:0x40008f1340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000438140 TLS:<nil>}
I0908 12:39:05.330973  318744 retry.go:31] will retry after 59.112236474s: Temporary Error: unexpected response code: 503
I0908 12:40:04.446293  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ec4e3a88-2e9a-4b3e-85c9-d0ef8cc4aef0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:40:04 GMT]] Body:0x40008f1340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003f0c80 TLS:<nil>}
I0908 12:40:04.446362  318744 retry.go:31] will retry after 31.371252517s: Temporary Error: unexpected response code: 503
I0908 12:40:35.822202  318744 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[40ee2a6b-ba22-4a31-9790-542d269df80e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 12:40:35 GMT]] Body:0x40008d21c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000438000 TLS:<nil>}
I0908 12:40:35.822266  318744 retry.go:31] will retry after 35.73055928s: Temporary Error: unexpected response code: 503
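
Every probe above returns 503; the dashboard service behind the proxy never becomes healthy before the test gives up and stops the dashboard command (functional_test.go:933: output didn't produce a URL). The same check can be repeated by hand; this is an illustrative sketch using the profile, port, and URL recorded in this log (the curl call and pod listing are not part of the harness):

	kubectl --context functional-140475 proxy --port 36195 &
	curl -i "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	# a persistent 503 usually means the kubernetes-dashboard pod is not Ready yet
	kubectl --context functional-140475 -n kubernetes-dashboard get pods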
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-140475
helpers_test.go:243: (dbg) docker inspect functional-140475:

-- stdout --
	[
	    {
	        "Id": "f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341",
	        "Created": "2025-09-08T12:22:33.259116131Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300594,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T12:22:33.335511126Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/hostname",
	        "HostsPath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/hosts",
	        "LogPath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341-json.log",
	        "Name": "/functional-140475",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-140475:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-140475",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341",
	                "LowerDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804-init/diff:/var/lib/docker/overlay2/4e9e34582c8fac27b8acdffb5ccaf9d8b30c2dae25a1b3b2b79fa116bc7d16cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-140475",
	                "Source": "/var/lib/docker/volumes/functional-140475/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-140475",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-140475",
	                "name.minikube.sigs.k8s.io": "functional-140475",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ce5a484316b053698273bb74a7bfdcf5e2405e0d4a8e758d9e2edbdb00445ff",
	            "SandboxKey": "/var/run/docker/netns/3ce5a484316b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-140475": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:bd:9a:64:d3:4a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c192c78e034e0a71a0e148767e9b0ec7ae14d2f5e09e1cfa298441ea22bbe0e5",
	                    "EndpointID": "b362f57131db43ce06461506a6aa968ca551222e3cb2b1e2a1609968c677929a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-140475",
	                        "f779030ae61f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-140475 -n functional-140475
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-140475 logs -n 25: (1.142365535s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                            ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-140475 image save kicbase/echo-server:functional-140475 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ image          │ functional-140475 image rm kicbase/echo-server:functional-140475 --alsologtostderr                                                                          │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ image          │ functional-140475 image ls                                                                                                                                  │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ image          │ functional-140475 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ image          │ functional-140475 image ls                                                                                                                                  │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ image          │ functional-140475 image save --daemon kicbase/echo-server:functional-140475 --alsologtostderr                                                               │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ docker-env     │ functional-140475 docker-env                                                                                                                                │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ docker-env     │ functional-140475 docker-env                                                                                                                                │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ ssh            │ functional-140475 ssh sudo cat /etc/test/nested/copy/274796/hosts                                                                                           │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ ssh            │ functional-140475 ssh sudo cat /etc/ssl/certs/274796.pem                                                                                                    │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ ssh            │ functional-140475 ssh sudo cat /usr/share/ca-certificates/274796.pem                                                                                        │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ ssh            │ functional-140475 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                    │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ ssh            │ functional-140475 ssh sudo cat /etc/ssl/certs/2747962.pem                                                                                                   │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ ssh            │ functional-140475 ssh sudo cat /usr/share/ca-certificates/2747962.pem                                                                                       │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ ssh            │ functional-140475 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                    │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ image          │ functional-140475 image ls --format short --alsologtostderr                                                                                                 │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ update-context │ functional-140475 update-context --alsologtostderr -v=2                                                                                                     │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ ssh            │ functional-140475 ssh pgrep buildkitd                                                                                                                       │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │                     │
	│ image          │ functional-140475 image build -t localhost/my-image:functional-140475 testdata/build --alsologtostderr                                                      │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ image          │ functional-140475 image ls                                                                                                                                  │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ image          │ functional-140475 image ls --format yaml --alsologtostderr                                                                                                  │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ image          │ functional-140475 image ls --format json --alsologtostderr                                                                                                  │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ image          │ functional-140475 image ls --format table --alsologtostderr                                                                                                 │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ update-context │ functional-140475 update-context --alsologtostderr -v=2                                                                                                     │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ update-context │ functional-140475 update-context --alsologtostderr -v=2                                                                                                     │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:35:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:35:57.580359  318643 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:35:57.580564  318643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:35:57.580570  318643 out.go:374] Setting ErrFile to fd 2...
	I0908 12:35:57.580576  318643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:35:57.581023  318643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
	I0908 12:35:57.581649  318643 out.go:368] Setting JSON to false
	I0908 12:35:57.583444  318643 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8308,"bootTime":1757326650,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 12:35:57.583539  318643 start.go:140] virtualization:  
	I0908 12:35:57.586955  318643 out.go:179] * [functional-140475] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 12:35:57.592776  318643 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 12:35:57.593090  318643 notify.go:220] Checking for updates...
	I0908 12:35:57.605487  318643 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:35:57.608633  318643 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig
	I0908 12:35:57.612109  318643 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube
	I0908 12:35:57.615824  318643 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 12:35:57.621034  318643 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:35:57.624547  318643 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:35:57.625659  318643 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:35:57.662370  318643 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:35:57.662493  318643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:35:57.730048  318643 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 12:35:57.719428123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:35:57.730160  318643 docker.go:318] overlay module found
	I0908 12:35:57.734094  318643 out.go:179] * Using the docker driver based on existing profile
	I0908 12:35:57.737112  318643 start.go:304] selected driver: docker
	I0908 12:35:57.737131  318643 start.go:918] validating driver "docker" against &{Name:functional-140475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:35:57.737212  318643 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:35:57.737326  318643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:35:57.832552  318643 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-08 12:35:57.821197016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:35:57.832914  318643 cni.go:84] Creating CNI manager for ""
	I0908 12:35:57.832977  318643 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 12:35:57.833023  318643 start.go:348] cluster config:
	{Name:functional-140475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:35:57.836035  318643 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 08 12:36:01 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:36:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ddcf15aa5dddd13ba972b31c1a4235c77114fa9b3a48bc6077b0ca2d843ec8e2/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 08 12:36:01 functional-140475 dockerd[6902]: time="2025-09-08T12:36:01.879829808Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 12:36:01 functional-140475 dockerd[6902]: time="2025-09-08T12:36:01.977312118Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:36:02 functional-140475 dockerd[6902]: time="2025-09-08T12:36:02.027115497Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 12:36:02 functional-140475 dockerd[6902]: time="2025-09-08T12:36:02.117051098Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:36:15 functional-140475 dockerd[6902]: time="2025-09-08T12:36:15.288216114Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 12:36:15 functional-140475 dockerd[6902]: time="2025-09-08T12:36:15.375911929Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:36:18 functional-140475 dockerd[6902]: time="2025-09-08T12:36:18.281107434Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 12:36:18 functional-140475 dockerd[6902]: time="2025-09-08T12:36:18.372202815Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:36:29 functional-140475 dockerd[6902]: time="2025-09-08T12:36:29.459298453Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:36:40 functional-140475 dockerd[6902]: time="2025-09-08T12:36:40.290387165Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 12:36:40 functional-140475 dockerd[6902]: time="2025-09-08T12:36:40.377620711Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:36:43 functional-140475 dockerd[6902]: time="2025-09-08T12:36:43.280602475Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 12:36:43 functional-140475 dockerd[6902]: time="2025-09-08T12:36:43.367565765Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:36:48 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:36:48Z" level=info msg="Stop pulling image kicbase/echo-server:latest: Status: Image is up to date for kicbase/echo-server:latest"
	Sep 08 12:37:32 functional-140475 dockerd[6902]: time="2025-09-08T12:37:32.294158774Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 12:37:32 functional-140475 dockerd[6902]: time="2025-09-08T12:37:32.473499316Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:37:32 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:37:32Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Sep 08 12:37:34 functional-140475 dockerd[6902]: time="2025-09-08T12:37:34.286429324Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 12:37:34 functional-140475 dockerd[6902]: time="2025-09-08T12:37:34.371865517Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:38:56 functional-140475 dockerd[6902]: time="2025-09-08T12:38:56.288414334Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 12:38:56 functional-140475 dockerd[6902]: time="2025-09-08T12:38:56.460764238Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:38:56 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:38:56Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Sep 08 12:39:01 functional-140475 dockerd[6902]: time="2025-09-08T12:39:01.278904915Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 12:39:01 functional-140475 dockerd[6902]: time="2025-09-08T12:39:01.358033408Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	27f8085554fc1       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           4 minutes ago       Running             echo-server               0                   83e116db12522       hello-node-connect-7d85dfc575-t5bmg
	70117b128644e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   e1729aa9858a3       busybox-mount
	6049e4afeaa34       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   3b73d8658387f       hello-node-75c85bcc94-x22bh
	a0f0bf25d6321       nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                         15 minutes ago      Running             nginx                     0                   f7c16945d9528       nginx-svc
	28f6049f43b21       138784d87c9c5                                                                                         15 minutes ago      Running             coredns                   2                   baa450b79b795       coredns-66bc5c9577-p79xg
	ae4f0a6634637       6fc32d66c1411                                                                                         15 minutes ago      Running             kube-proxy                3                   4aa0c34c1b998       kube-proxy-mtw87
	3e56d761c8413       ba04bb24b9575                                                                                         15 minutes ago      Running             storage-provisioner       4                   5df3eb436cb71       storage-provisioner
	8e057b7f41de7       a25f5ef9c34c3                                                                                         15 minutes ago      Running             kube-scheduler            3                   02257bbd60ea0       kube-scheduler-functional-140475
	264ed758e3516       d291939e99406                                                                                         15 minutes ago      Running             kube-apiserver            0                   a12878798ff69       kube-apiserver-functional-140475
	b5a5bff40e315       996be7e86d9b3                                                                                         15 minutes ago      Running             kube-controller-manager   3                   46117db363397       kube-controller-manager-functional-140475
	ca1f2eb2a56e6       a1894772a478e                                                                                         15 minutes ago      Running             etcd                      2                   7876da18c12d9       etcd-functional-140475
	207c0c3df856b       996be7e86d9b3                                                                                         15 minutes ago      Created             kube-controller-manager   2                   ddb7ba696cfb7       kube-controller-manager-functional-140475
	152b108a85c33       a25f5ef9c34c3                                                                                         15 minutes ago      Created             kube-scheduler            2                   0101077a99284       kube-scheduler-functional-140475
	4ce992834f477       6fc32d66c1411                                                                                         15 minutes ago      Exited              kube-proxy                2                   e1c898d52b181       kube-proxy-mtw87
	6bd6d37f8ca18       ba04bb24b9575                                                                                         16 minutes ago      Exited              storage-provisioner       3                   5d4262a965e8d       storage-provisioner
	e96a79d425559       138784d87c9c5                                                                                         16 minutes ago      Exited              coredns                   1                   624d0c2700e94       coredns-66bc5c9577-p79xg
	02ea4d507b878       a1894772a478e                                                                                         16 minutes ago      Exited              etcd                      1                   f043527a68f42       etcd-functional-140475
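	
	Consistent with those failed pulls, no kubernetesui/dashboard or metrics-scraper container ever appears in the list above, even though both dashboard pods had existed for roughly five minutes by the time of this dump (see the Non-terminated Pods table further down). If a kubectl context for this profile is available (an assumption; the harness only drives the bundled minikube binary), the stuck pulls can be confirmed directly, with the Events at the end of each describe expected to show the ImagePullBackOff reason:
	
	  kubectl --context functional-140475 -n kubernetes-dashboard get pods
	  kubectl --context functional-140475 -n kubernetes-dashboard describe pods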
	
	
	==> coredns [28f6049f43b2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53675 - 53593 "HINFO IN 2100446065665056732.4545956877188492551. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030972055s
	
	
	==> coredns [e96a79d42555] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50588 - 59981 "HINFO IN 3063204499314061978.8433951396937313554. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013894801s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-140475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-140475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=functional-140475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T12_23_00_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:22:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-140475
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:40:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:36:53 +0000   Mon, 08 Sep 2025 12:22:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:36:53 +0000   Mon, 08 Sep 2025 12:22:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:36:53 +0000   Mon, 08 Sep 2025 12:22:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:36:53 +0000   Mon, 08 Sep 2025 12:22:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-140475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 28cc2cd20fe74a02bfe1586146117dde
	  System UUID:                a03639dc-39eb-4af1-8eff-ffc8a710a78a
	  Boot ID:                    3b69f852-7505-47f7-82de-581d66319e23
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-x22bh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-connect-7d85dfc575-t5bmg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-p79xg                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-functional-140475                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kube-apiserver-functional-140475              250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-functional-140475     200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-mtw87                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-140475              100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4kltn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zjscm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node functional-140475 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node functional-140475 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node functional-140475 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m                kubelet          Node functional-140475 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    17m                kubelet          Node functional-140475 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m                kubelet          Node functional-140475 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           17m                node-controller  Node functional-140475 event: Registered Node functional-140475 in Controller
	  Warning  ContainerGCFailed        16m                kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   RegisteredNode           16m                node-controller  Node functional-140475 event: Registered Node functional-140475 in Controller
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node functional-140475 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node functional-140475 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node functional-140475 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                node-controller  Node functional-140475 event: Registered Node functional-140475 in Controller
	
	
	==> dmesg <==
	[Sep 8 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014150] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.486895] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033827] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.725700] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.488700] kauditd_printk_skb: 36 callbacks suppressed
	[Sep 8 10:40] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep 8 11:30] hrtimer: interrupt took 33050655 ns
	[Sep 8 12:15] kauditd_printk_skb: 8 callbacks suppressed
	[Sep 8 12:35] FS-Cache: Duplicate cookie detected
	[  +0.000684] FS-Cache: O-cookie c=00000025 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000907] FS-Cache: O-cookie d=00000000f75621f8{9P.session} n=000000002e0501ee
	[  +0.001029] FS-Cache: O-key=[10] '34323936393639353436'
	[  +0.000727] FS-Cache: N-cookie c=00000026 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000883] FS-Cache: N-cookie d=00000000f75621f8{9P.session} n=00000000ccfa13d2
	[  +0.001067] FS-Cache: N-key=[10] '34323936393639353436'
	
	
	==> etcd [02ea4d507b87] <==
	{"level":"warn","ts":"2025-09-08T12:24:16.639838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.649417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.688735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.731763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.765271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.778536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.910398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39510","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T12:24:57.890978Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T12:24:57.891064Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-140475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-08T12:24:57.891169Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T12:24:57.891444Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T12:25:04.896880Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T12:25:04.896943Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-08T12:25:04.897086Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T12:25:04.897172Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T12:25:04.897205Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T12:25:04.897262Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-08T12:25:04.897312Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-08T12:25:04.899523Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T12:25:04.899576Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T12:25:04.899589Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T12:25:04.902221Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-08T12:25:04.902306Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T12:25:04.902428Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-08T12:25:04.902507Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-140475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ca1f2eb2a56e] <==
	{"level":"warn","ts":"2025-09-08T12:25:17.711751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.736991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.763646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.841197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.852797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.876013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.907257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.950331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.980848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.017529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.061396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.088202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.122939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.166770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.220953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.244197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.257729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.273735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.337140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42186","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T12:35:16.771183Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1157}
	{"level":"info","ts":"2025-09-08T12:35:16.795010Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1157,"took":"23.469328ms","hash":1845616592,"current-db-size-bytes":3301376,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1523712,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-09-08T12:35:16.795073Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1845616592,"revision":1157,"compact-revision":-1}
	{"level":"info","ts":"2025-09-08T12:40:16.775355Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1461}
	{"level":"info","ts":"2025-09-08T12:40:16.778328Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1461,"took":"2.429889ms","hash":923434856,"current-db-size-bytes":3301376,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":2330624,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-09-08T12:40:16.778383Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":923434856,"revision":1461,"compact-revision":1157}
	
	
	==> kernel <==
	 12:40:59 up  2:23,  0 users,  load average: 0.15, 0.45, 0.71
	Linux functional-140475 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [264ed758e351] <==
	I0908 12:28:52.414335       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:29:25.650405       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:29:52.823851       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.39.142"}
	I0908 12:30:14.183727       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:30:47.334511       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:31:43.367778       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:32:05.544623       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:32:45.391793       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:33:29.726018       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:34:06.733632       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:34:30.956819       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:35:19.387854       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 12:35:24.827665       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:35:52.296396       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:35:59.459035       1 controller.go:667] quota admission added evaluator for: namespaces
	I0908 12:35:59.852223       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.157.142"}
	I0908 12:35:59.897954       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.149.142"}
	I0908 12:36:33.153896       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:37:03.840767       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:37:45.854817       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:38:24.845669       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:38:49.479578       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:39:32.292531       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:40:06.437273       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:40:34.240173       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [207c0c3df856] <==
	
	
	==> kube-controller-manager [b5a5bff40e31] <==
	I0908 12:25:22.685677       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 12:25:22.686301       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 12:25:22.691882       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 12:25:22.696116       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 12:25:22.698431       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 12:25:22.711012       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 12:25:22.714314       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 12:25:22.723484       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 12:25:22.729486       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 12:25:22.729518       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 12:25:22.729912       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 12:25:22.730035       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 12:25:22.729535       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 12:25:22.729765       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 12:25:22.731485       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0908 12:25:22.731720       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0908 12:25:22.734707       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 12:25:22.736252       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-140475"
	I0908 12:25:22.737513       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	E0908 12:35:59.606734       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 12:35:59.617788       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 12:35:59.630008       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 12:35:59.637787       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 12:35:59.642582       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 12:35:59.652929       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [4ce992834f47] <==
	I0908 12:25:10.488602       1 server_linux.go:53] "Using iptables proxy"
	I0908 12:25:10.606414       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0908 12:25:10.607244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-140475&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [ae4f0a663463] <==
	I0908 12:25:20.785069       1 server_linux.go:53] "Using iptables proxy"
	I0908 12:25:20.921442       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 12:25:21.025213       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 12:25:21.025245       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 12:25:21.025320       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 12:25:21.179249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 12:25:21.182406       1 server_linux.go:132] "Using iptables Proxier"
	I0908 12:25:21.227302       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 12:25:21.227578       1 server.go:527] "Version info" version="v1.34.0"
	I0908 12:25:21.227593       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:25:21.229207       1 config.go:200] "Starting service config controller"
	I0908 12:25:21.229217       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 12:25:21.229243       1 config.go:106] "Starting endpoint slice config controller"
	I0908 12:25:21.229247       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 12:25:21.229258       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 12:25:21.229262       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 12:25:21.237957       1 config.go:309] "Starting node config controller"
	I0908 12:25:21.237979       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 12:25:21.237987       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 12:25:21.330477       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 12:25:21.340787       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 12:25:21.340836       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [152b108a85c3] <==
	
	
	==> kube-scheduler [8e057b7f41de] <==
	I0908 12:25:18.036734       1 serving.go:386] Generated self-signed cert in-memory
	I0908 12:25:19.475887       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 12:25:19.475921       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:25:19.484805       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 12:25:19.485031       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 12:25:19.485180       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:25:19.485277       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:25:19.485375       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 12:25:19.485465       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 12:25:19.486343       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 12:25:19.487832       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 12:25:19.585623       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 12:25:19.585742       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:25:19.585630       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Sep 08 12:39:12 functional-140475 kubelet[8797]: E0908 12:39:12.239385    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
	Sep 08 12:39:20 functional-140475 kubelet[8797]: E0908 12:39:20.237173    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:39:20 functional-140475 kubelet[8797]: E0908 12:39:20.242262    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
	Sep 08 12:39:24 functional-140475 kubelet[8797]: E0908 12:39:24.247169    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
	Sep 08 12:39:32 functional-140475 kubelet[8797]: E0908 12:39:32.237228    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:39:32 functional-140475 kubelet[8797]: E0908 12:39:32.241205    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
	Sep 08 12:39:38 functional-140475 kubelet[8797]: E0908 12:39:38.241532    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
	Sep 08 12:39:45 functional-140475 kubelet[8797]: E0908 12:39:45.244857    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
	Sep 08 12:39:46 functional-140475 kubelet[8797]: E0908 12:39:46.237271    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:39:52 functional-140475 kubelet[8797]: E0908 12:39:52.239804    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
	Sep 08 12:39:58 functional-140475 kubelet[8797]: E0908 12:39:58.242392    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
	Sep 08 12:40:01 functional-140475 kubelet[8797]: E0908 12:40:01.237213    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:40:05 functional-140475 kubelet[8797]: E0908 12:40:05.239461    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
	Sep 08 12:40:12 functional-140475 kubelet[8797]: E0908 12:40:12.239250    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
	Sep 08 12:40:15 functional-140475 kubelet[8797]: E0908 12:40:15.237319    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:40:20 functional-140475 kubelet[8797]: E0908 12:40:20.241161    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
	Sep 08 12:40:25 functional-140475 kubelet[8797]: E0908 12:40:25.239314    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
	Sep 08 12:40:26 functional-140475 kubelet[8797]: E0908 12:40:26.237273    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:40:31 functional-140475 kubelet[8797]: E0908 12:40:31.238843    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
	Sep 08 12:40:38 functional-140475 kubelet[8797]: E0908 12:40:38.238014    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:40:40 functional-140475 kubelet[8797]: E0908 12:40:40.240390    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
	Sep 08 12:40:42 functional-140475 kubelet[8797]: E0908 12:40:42.241262    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
	Sep 08 12:40:50 functional-140475 kubelet[8797]: E0908 12:40:50.247155    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:40:53 functional-140475 kubelet[8797]: E0908 12:40:53.238780    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4kltn" podUID="6e537e2b-3a5d-4484-99aa-7af460bfbce7"
	Sep 08 12:40:55 functional-140475 kubelet[8797]: E0908 12:40:55.239758    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zjscm" podUID="6e670da3-937f-40cb-ac27-3943a8ec0fef"
	
	
	==> storage-provisioner [3e56d761c841] <==
	W0908 12:40:34.584336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:36.587515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:36.591635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:38.594616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:38.599088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:40.601793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:40.606810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:42.609330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:42.613402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:44.615964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:44.621059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:46.624192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:46.628475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:48.631106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:48.636224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:50.639229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:50.643297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:52.646679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:52.652197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:54.654871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:54.658656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:56.663449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:56.668731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:58.671786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:58.677615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6bd6d37f8ca1] <==
	I0908 12:24:39.805610       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 12:24:39.817707       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 12:24:39.818001       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 12:24:39.827051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:43.281855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:47.542472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:51.141255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:54.195301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:57.217075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:57.222388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 12:24:57.222556       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 12:24:57.222806       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-140475_be3659dd-d2e5-49cd-8419-febf302bbd52!
	I0908 12:24:57.224364       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6f4e765b-be4a-4c1c-98b1-2642ed77f8a2", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-140475_be3659dd-d2e5-49cd-8419-febf302bbd52 became leader
	W0908 12:24:57.228147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:57.233779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 12:24:57.323828       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-140475_be3659dd-d2e5-49cd-8419-febf302bbd52!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-140475 -n functional-140475
helpers_test.go:269: (dbg) Run:  kubectl --context functional-140475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-4kltn kubernetes-dashboard-855c9754f9-zjscm
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-140475 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-4kltn kubernetes-dashboard-855c9754f9-zjscm
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-140475 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-4kltn kubernetes-dashboard-855c9754f9-zjscm: exit status 1 (199.921752ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-140475/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 12:35:49 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://70117b128644e2e4767f5fbbfdc02ceeecd480efe2fdcb53a147bb5f55a75ea6
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 12:35:52 +0000
	      Finished:     Mon, 08 Sep 2025 12:35:52 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-66vdl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-66vdl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m10s  default-scheduler  Successfully assigned default/busybox-mount to functional-140475
	  Normal  Pulling    5m10s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m8s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.051s (2.051s including waiting). Image size: 3547125 bytes.
	  Normal  Created    5m8s   kubelet            Created container: mount-munger
	  Normal  Started    5m8s   kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-140475/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 12:25:49 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h6sml (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-h6sml:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  15m                   default-scheduler  Successfully assigned default/sp-pod to functional-140475
	  Warning  Failed     13m (x3 over 14m)     kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    12m (x5 over 15m)     kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     12m (x2 over 15m)     kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     12m (x5 over 15m)     kubelet            Error: ErrImagePull
	  Normal   BackOff    4m56s (x43 over 15m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4m56s (x43 over 15m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-4kltn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-zjscm" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-140475 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-4kltn kubernetes-dashboard-855c9754f9-zjscm: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.27s)
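Every ImagePullBackOff in the kubelet log above carries the same root cause: Docker Hub's unauthenticated pull rate limit ("toomanyrequests"), so the dashboard pods never start and the test times out waiting for a URL. A minimal sketch of how the affected images could be made available without hitting Docker Hub when reproducing this failure locally; the profile name is taken from this report, the Docker Hub account is a placeholder, and none of these commands were run by this job:

    # Pre-load the dashboard images into the cluster so kubelet can find them locally
    # (image names copied from the kubelet errors above).
    docker pull docker.io/kubernetesui/dashboard:v2.7.0
    docker pull docker.io/kubernetesui/metrics-scraper:v1.0.8
    minikube -p functional-140475 image load docker.io/kubernetesui/dashboard:v2.7.0
    minikube -p functional-140475 image load docker.io/kubernetesui/metrics-scraper:v1.0.8

    # Alternatively, authenticate the node's Docker daemon so pulls count against an
    # account quota instead of the anonymous one (prompts for a password).
    minikube -p functional-140475 ssh -- docker login -u <docker-hub-user>

Either route only sidesteps the rate limit; it does not change what the test itself exercises.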

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-140475 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-140475 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-t5bmg" [3d104d49-ee46-419f-87a7-43b430053f2b] Pending
helpers_test.go:352: "hello-node-connect-7d85dfc575-t5bmg" [3d104d49-ee46-419f-87a7-43b430053f2b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0908 12:26:22.865449  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:28:38.997923  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:29:06.706781  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-140475 -n functional-140475
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-08 12:35:52.483483978 +0000 UTC m=+1195.803559163
functional_test.go:1645: (dbg) Run:  kubectl --context functional-140475 describe po hello-node-connect-7d85dfc575-t5bmg -n default
functional_test.go:1645: (dbg) kubectl --context functional-140475 describe po hello-node-connect-7d85dfc575-t5bmg -n default:
Name:             hello-node-connect-7d85dfc575-t5bmg
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-140475/192.168.49.2
Start Time:       Mon, 08 Sep 2025 12:25:51 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sks76 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-sks76:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-t5bmg to functional-140475
Normal   Pulling    7m3s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m3s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m3s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m50s (x22 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m50s (x22 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-140475 logs hello-node-connect-7d85dfc575-t5bmg -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-140475 logs hello-node-connect-7d85dfc575-t5bmg -n default: exit status 1 (94.718771ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-t5bmg" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-140475 logs hello-node-connect-7d85dfc575-t5bmg -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-140475 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-t5bmg
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-140475/192.168.49.2
Start Time:       Mon, 08 Sep 2025 12:25:51 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sks76 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-sks76:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-t5bmg to functional-140475
Normal   Pulling    7m3s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m3s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m3s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m50s (x22 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m50s (x22 over 9m59s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-140475 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-140475 logs -l app=hello-node-connect: exit status 1 (107.249433ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-t5bmg" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-140475 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-140475 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.5.255
IPs:                      10.99.5.255
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30151/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
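The describe output above also shows an empty Endpoints field: the service selects app=hello-node-connect, but its only pod never became Ready, so the NodePort has no backend to forward to. A short check, under the same assumptions (context name taken from this report), that makes the link between the BackOff events and the empty endpoints explicit:

    kubectl --context functional-140475 get endpoints hello-node-connect
    kubectl --context functional-140475 get pods -l app=hello-node-connect \
      -o custom-columns=NAME:.metadata.name,READY:.status.containerStatuses[0].ready,REASON:.status.containerStatuses[0].state.waiting.reason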
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-140475
helpers_test.go:243: (dbg) docker inspect functional-140475:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341",
	        "Created": "2025-09-08T12:22:33.259116131Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300594,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T12:22:33.335511126Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/hostname",
	        "HostsPath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/hosts",
	        "LogPath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341-json.log",
	        "Name": "/functional-140475",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-140475:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-140475",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341",
	                "LowerDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804-init/diff:/var/lib/docker/overlay2/4e9e34582c8fac27b8acdffb5ccaf9d8b30c2dae25a1b3b2b79fa116bc7d16cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-140475",
	                "Source": "/var/lib/docker/volumes/functional-140475/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-140475",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-140475",
	                "name.minikube.sigs.k8s.io": "functional-140475",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ce5a484316b053698273bb74a7bfdcf5e2405e0d4a8e758d9e2edbdb00445ff",
	            "SandboxKey": "/var/run/docker/netns/3ce5a484316b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-140475": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:bd:9a:64:d3:4a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c192c78e034e0a71a0e148767e9b0ec7ae14d2f5e09e1cfa298441ea22bbe0e5",
	                    "EndpointID": "b362f57131db43ce06461506a6aa968ca551222e3cb2b1e2a1609968c677929a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-140475",
	                        "f779030ae61f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-140475 -n functional-140475
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-140475 logs -n 25: (1.310275657s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ config  │ functional-140475 config get cpus                                                                                          │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ config  │ functional-140475 config unset cpus                                                                                        │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ ssh     │ functional-140475 ssh -n functional-140475 sudo cat /home/docker/cp-test.txt                                               │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ config  │ functional-140475 config get cpus                                                                                          │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │                     │
	│ ssh     │ functional-140475 ssh echo hello                                                                                           │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ cp      │ functional-140475 cp functional-140475:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2204074601/001/cp-test.txt │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ ssh     │ functional-140475 ssh cat /etc/hostname                                                                                    │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ ssh     │ functional-140475 ssh -n functional-140475 sudo cat /home/docker/cp-test.txt                                               │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ tunnel  │ functional-140475 tunnel --alsologtostderr                                                                                 │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │                     │
	│ tunnel  │ functional-140475 tunnel --alsologtostderr                                                                                 │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │                     │
	│ cp      │ functional-140475 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ ssh     │ functional-140475 ssh -n functional-140475 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ tunnel  │ functional-140475 tunnel --alsologtostderr                                                                                 │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │                     │
	│ addons  │ functional-140475 addons list                                                                                              │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ addons  │ functional-140475 addons list -o json                                                                                      │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ service │ functional-140475 service list                                                                                             │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ service │ functional-140475 service list -o json                                                                                     │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ service │ functional-140475 service --namespace=default --https --url hello-node                                                     │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ service │ functional-140475 service hello-node --url --format={{.IP}}                                                                │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ service │ functional-140475 service hello-node --url                                                                                 │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ mount   │ -p functional-140475 /tmp/TestFunctionalparallelMountCmdany-port2268302564/001:/mount-9p --alsologtostderr -v=1            │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │                     │
	│ ssh     │ functional-140475 ssh findmnt -T /mount-9p | grep 9p                                                                       │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │                     │
	│ ssh     │ functional-140475 ssh findmnt -T /mount-9p | grep 9p                                                                       │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ functional-140475 ssh -- ls -la /mount-9p                                                                                  │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ functional-140475 ssh cat /mount-9p/test-1757334947499765445                                                               │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:24:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:24:38.805779  307961 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:24:38.805904  307961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:24:38.805908  307961 out.go:374] Setting ErrFile to fd 2...
	I0908 12:24:38.805912  307961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:24:38.806275  307961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
	I0908 12:24:38.807269  307961 out.go:368] Setting JSON to false
	I0908 12:24:38.808261  307961 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7629,"bootTime":1757326650,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 12:24:38.808322  307961 start.go:140] virtualization:  
	I0908 12:24:38.813681  307961 out.go:179] * [functional-140475] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 12:24:38.816646  307961 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 12:24:38.816757  307961 notify.go:220] Checking for updates...
	I0908 12:24:38.822592  307961 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:24:38.825555  307961 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig
	I0908 12:24:38.828314  307961 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube
	I0908 12:24:38.831112  307961 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 12:24:38.834107  307961 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:24:38.837571  307961 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:24:38.837657  307961 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:24:38.870196  307961 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:24:38.870327  307961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:24:38.946560  307961 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-08 12:24:38.936789289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:24:38.946672  307961 docker.go:318] overlay module found
	I0908 12:24:38.949850  307961 out.go:179] * Using the docker driver based on existing profile
	I0908 12:24:38.952858  307961 start.go:304] selected driver: docker
	I0908 12:24:38.952869  307961 start.go:918] validating driver "docker" against &{Name:functional-140475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:24:38.952972  307961 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:24:38.953074  307961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:24:39.015765  307961 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-08 12:24:39.005891577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:24:39.022160  307961 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:24:39.022186  307961 cni.go:84] Creating CNI manager for ""
	I0908 12:24:39.022255  307961 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 12:24:39.022319  307961 start.go:348] cluster config:
	{Name:functional-140475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:24:39.025507  307961 out.go:179] * Starting "functional-140475" primary control-plane node in "functional-140475" cluster
	I0908 12:24:39.028338  307961 cache.go:123] Beginning downloading kic base image for docker with docker
	I0908 12:24:39.031177  307961 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:24:39.033923  307961 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 12:24:39.033970  307961 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-272936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	I0908 12:24:39.033978  307961 cache.go:58] Caching tarball of preloaded images
	I0908 12:24:39.034007  307961 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:24:39.034063  307961 preload.go:172] Found /home/jenkins/minikube-integration/21508-272936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0908 12:24:39.034072  307961 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 12:24:39.034187  307961 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/config.json ...
	I0908 12:24:39.053977  307961 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 12:24:39.053988  307961 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 12:24:39.054001  307961 cache.go:232] Successfully downloaded all kic artifacts
	I0908 12:24:39.054022  307961 start.go:360] acquireMachinesLock for functional-140475: {Name:mk6b5e0f12e93a7e43a3198f394e5ecd19765868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:24:39.054075  307961 start.go:364] duration metric: took 37.038µs to acquireMachinesLock for "functional-140475"
	I0908 12:24:39.054093  307961 start.go:96] Skipping create...Using existing machine configuration
	I0908 12:24:39.054104  307961 fix.go:54] fixHost starting: 
	I0908 12:24:39.054404  307961 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
	I0908 12:24:39.070636  307961 fix.go:112] recreateIfNeeded on functional-140475: state=Running err=<nil>
	W0908 12:24:39.070655  307961 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 12:24:39.073856  307961 out.go:252] * Updating the running docker "functional-140475" container ...
	I0908 12:24:39.073883  307961 machine.go:93] provisionDockerMachine start ...
	I0908 12:24:39.073972  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:39.095706  307961 main.go:141] libmachine: Using SSH client type: native
	I0908 12:24:39.096021  307961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0908 12:24:39.096028  307961 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:24:39.219506  307961 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-140475
	
	I0908 12:24:39.219522  307961 ubuntu.go:182] provisioning hostname "functional-140475"
	I0908 12:24:39.219593  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:39.237514  307961 main.go:141] libmachine: Using SSH client type: native
	I0908 12:24:39.237826  307961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0908 12:24:39.237836  307961 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-140475 && echo "functional-140475" | sudo tee /etc/hostname
	I0908 12:24:39.376149  307961 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-140475
	
	I0908 12:24:39.376231  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:39.394454  307961 main.go:141] libmachine: Using SSH client type: native
	I0908 12:24:39.394772  307961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0908 12:24:39.394787  307961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-140475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-140475/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-140475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:24:39.526330  307961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:24:39.526364  307961 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-272936/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-272936/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-272936/.minikube}
	I0908 12:24:39.526386  307961 ubuntu.go:190] setting up certificates
	I0908 12:24:39.526395  307961 provision.go:84] configureAuth start
	I0908 12:24:39.526473  307961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-140475
	I0908 12:24:39.544256  307961 provision.go:143] copyHostCerts
	I0908 12:24:39.544312  307961 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-272936/.minikube/ca.pem, removing ...
	I0908 12:24:39.544333  307961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-272936/.minikube/ca.pem
	I0908 12:24:39.544396  307961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-272936/.minikube/ca.pem (1078 bytes)
	I0908 12:24:39.544485  307961 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-272936/.minikube/cert.pem, removing ...
	I0908 12:24:39.544489  307961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-272936/.minikube/cert.pem
	I0908 12:24:39.544514  307961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-272936/.minikube/cert.pem (1123 bytes)
	I0908 12:24:39.544562  307961 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-272936/.minikube/key.pem, removing ...
	I0908 12:24:39.544565  307961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-272936/.minikube/key.pem
	I0908 12:24:39.544587  307961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-272936/.minikube/key.pem (1679 bytes)
	I0908 12:24:39.544630  307961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-272936/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca-key.pem org=jenkins.functional-140475 san=[127.0.0.1 192.168.49.2 functional-140475 localhost minikube]
	I0908 12:24:40.206452  307961 provision.go:177] copyRemoteCerts
	I0908 12:24:40.206513  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:24:40.206561  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:40.224526  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
	I0908 12:24:40.317369  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 12:24:40.343589  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 12:24:40.374905  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 12:24:40.404704  307961 provision.go:87] duration metric: took 878.296678ms to configureAuth
	I0908 12:24:40.404722  307961 ubuntu.go:206] setting minikube options for container-runtime
	I0908 12:24:40.404930  307961 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:24:40.404987  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:40.427465  307961 main.go:141] libmachine: Using SSH client type: native
	I0908 12:24:40.427769  307961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0908 12:24:40.427777  307961 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 12:24:40.553335  307961 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0908 12:24:40.553346  307961 ubuntu.go:71] root file system type: overlay
	I0908 12:24:40.553467  307961 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 12:24:40.553534  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:40.572469  307961 main.go:141] libmachine: Using SSH client type: native
	I0908 12:24:40.572772  307961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0908 12:24:40.572847  307961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 12:24:40.708472  307961 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0908 12:24:40.708551  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:40.726963  307961 main.go:141] libmachine: Using SSH client type: native
	I0908 12:24:40.727265  307961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0908 12:24:40.727280  307961 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 12:24:40.857687  307961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:24:40.857700  307961 machine.go:96] duration metric: took 1.783809767s to provisionDockerMachine
	I0908 12:24:40.857710  307961 start.go:293] postStartSetup for "functional-140475" (driver="docker")
	I0908 12:24:40.857720  307961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:24:40.857815  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:24:40.857856  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:40.876555  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
	I0908 12:24:40.969373  307961 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:24:40.972964  307961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 12:24:40.972987  307961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 12:24:40.972996  307961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 12:24:40.973002  307961 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 12:24:40.973011  307961 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-272936/.minikube/addons for local assets ...
	I0908 12:24:40.973067  307961 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-272936/.minikube/files for local assets ...
	I0908 12:24:40.973153  307961 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-272936/.minikube/files/etc/ssl/certs/2747962.pem -> 2747962.pem in /etc/ssl/certs
	I0908 12:24:40.973232  307961 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-272936/.minikube/files/etc/test/nested/copy/274796/hosts -> hosts in /etc/test/nested/copy/274796
	I0908 12:24:40.973277  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/274796
	I0908 12:24:40.982259  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/files/etc/ssl/certs/2747962.pem --> /etc/ssl/certs/2747962.pem (1708 bytes)
	I0908 12:24:41.008450  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/files/etc/test/nested/copy/274796/hosts --> /etc/test/nested/copy/274796/hosts (40 bytes)
	I0908 12:24:41.042546  307961 start.go:296] duration metric: took 184.820828ms for postStartSetup
	I0908 12:24:41.042641  307961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:24:41.042685  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:41.060425  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
	I0908 12:24:41.149190  307961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 12:24:41.154107  307961 fix.go:56] duration metric: took 2.100003075s for fixHost
	I0908 12:24:41.154123  307961 start.go:83] releasing machines lock for "functional-140475", held for 2.100040803s
	I0908 12:24:41.154187  307961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-140475
	I0908 12:24:41.170748  307961 ssh_runner.go:195] Run: cat /version.json
	I0908 12:24:41.170807  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:41.171122  307961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 12:24:41.171175  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:41.200909  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
	I0908 12:24:41.201658  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
	I0908 12:24:41.409551  307961 ssh_runner.go:195] Run: systemctl --version
	I0908 12:24:41.413819  307961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:24:41.418199  307961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0908 12:24:41.435960  307961 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0908 12:24:41.436036  307961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:24:41.445590  307961 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 12:24:41.445610  307961 start.go:495] detecting cgroup driver to use...
	I0908 12:24:41.445654  307961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 12:24:41.445751  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:24:41.462870  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 12:24:41.475459  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 12:24:41.486009  307961 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 12:24:41.486079  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 12:24:41.496429  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:24:41.509909  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 12:24:41.525577  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:24:41.537217  307961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:24:41.548235  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 12:24:41.559195  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 12:24:41.571795  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 12:24:41.583341  307961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:24:41.592890  307961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:24:41.602220  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:24:41.721513  307961 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 12:24:41.953565  307961 start.go:495] detecting cgroup driver to use...
	I0908 12:24:41.953605  307961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 12:24:41.953653  307961 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 12:24:41.970094  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:24:41.984040  307961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:24:42.003881  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:24:42.028519  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 12:24:42.042258  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:24:42.064216  307961 ssh_runner.go:195] Run: which cri-dockerd
	I0908 12:24:42.068678  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 12:24:42.079484  307961 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 12:24:42.104657  307961 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 12:24:42.224934  307961 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 12:24:42.345867  307961 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 12:24:42.345965  307961 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 12:24:42.367785  307961 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 12:24:42.379947  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:24:42.490097  307961 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 12:25:08.449575  307961 ssh_runner.go:235] Completed: sudo systemctl restart docker: (25.959453779s)
	I0908 12:25:08.449637  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:25:08.463783  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 12:25:08.477783  307961 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0908 12:25:08.502065  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:25:08.515472  307961 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 12:25:08.611793  307961 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 12:25:08.706944  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:25:08.807621  307961 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 12:25:08.822702  307961 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 12:25:08.834746  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:25:08.927715  307961 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 12:25:09.006840  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:25:09.022281  307961 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 12:25:09.022341  307961 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 12:25:09.026672  307961 start.go:563] Will wait 60s for crictl version
	I0908 12:25:09.026729  307961 ssh_runner.go:195] Run: which crictl
	I0908 12:25:09.030168  307961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:25:09.069408  307961 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 12:25:09.069468  307961 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:25:09.091988  307961 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:25:09.119701  307961 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 12:25:09.119784  307961 cli_runner.go:164] Run: docker network inspect functional-140475 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:25:09.135966  307961 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 12:25:09.142804  307961 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0908 12:25:09.145841  307961 kubeadm.go:875] updating cluster {Name:functional-140475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:25:09.145965  307961 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 12:25:09.146046  307961 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 12:25:09.165618  307961 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-140475
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0908 12:25:09.165633  307961 docker.go:621] Images already preloaded, skipping extraction
	I0908 12:25:09.165703  307961 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 12:25:09.184547  307961 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-140475
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0908 12:25:09.184562  307961 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:25:09.184570  307961 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 docker true true} ...
	I0908 12:25:09.184676  307961 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-140475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 12:25:09.184743  307961 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0908 12:25:09.234162  307961 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0908 12:25:09.234182  307961 cni.go:84] Creating CNI manager for ""
	I0908 12:25:09.234208  307961 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 12:25:09.234215  307961 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:25:09.234234  307961 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-140475 NodeName:functional-140475 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:
map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:25:09.234346  307961 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-140475"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 12:25:09.234409  307961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:25:09.243152  307961 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:25:09.243228  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:25:09.252021  307961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0908 12:25:09.271146  307961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:25:09.289010  307961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
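
The kubeadm document rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what was just copied to the node as /var/tmp/minikube/kubeadm.yaml.new. As an illustration only (not captured in this log), it could be read back from the node and compared against kubeadm's own defaults for the same binary version:

	minikube -p functional-140475 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	minikube -p functional-140475 ssh -- sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config print init-defaults
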
	I0908 12:25:09.307402  307961 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 12:25:09.310777  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:25:09.402009  307961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:25:09.413396  307961 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475 for IP: 192.168.49.2
	I0908 12:25:09.413407  307961 certs.go:194] generating shared ca certs ...
	I0908 12:25:09.413422  307961 certs.go:226] acquiring lock for ca certs: {Name:mkab0eab768f036514950b55081b45acf0f9ba87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:25:09.413551  307961 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-272936/.minikube/ca.key
	I0908 12:25:09.413586  307961 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-272936/.minikube/proxy-client-ca.key
	I0908 12:25:09.413592  307961 certs.go:256] generating profile certs ...
	I0908 12:25:09.413672  307961 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.key
	I0908 12:25:09.413719  307961 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/apiserver.key.e6897943
	I0908 12:25:09.413752  307961 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/proxy-client.key
	I0908 12:25:09.413864  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/274796.pem (1338 bytes)
	W0908 12:25:09.413888  307961 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-272936/.minikube/certs/274796_empty.pem, impossibly tiny 0 bytes
	I0908 12:25:09.413895  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 12:25:09.413920  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca.pem (1078 bytes)
	I0908 12:25:09.413943  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/cert.pem (1123 bytes)
	I0908 12:25:09.413965  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/key.pem (1679 bytes)
	I0908 12:25:09.414005  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-272936/.minikube/files/etc/ssl/certs/2747962.pem (1708 bytes)
	I0908 12:25:09.414669  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:25:09.438677  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 12:25:09.462973  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:25:09.487126  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 12:25:09.523201  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 12:25:09.564576  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 12:25:09.609080  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:25:09.660398  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 12:25:09.722557  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/files/etc/ssl/certs/2747962.pem --> /usr/share/ca-certificates/2747962.pem (1708 bytes)
	I0908 12:25:09.795191  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:25:09.852043  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/certs/274796.pem --> /usr/share/ca-certificates/274796.pem (1338 bytes)
	I0908 12:25:09.929325  307961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 12:25:09.956294  307961 ssh_runner.go:195] Run: openssl version
	I0908 12:25:09.964741  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2747962.pem && ln -fs /usr/share/ca-certificates/2747962.pem /etc/ssl/certs/2747962.pem"
	I0908 12:25:09.981038  307961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2747962.pem
	I0908 12:25:09.987492  307961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 12:22 /usr/share/ca-certificates/2747962.pem
	I0908 12:25:09.987560  307961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2747962.pem
	I0908 12:25:10.012655  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2747962.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 12:25:10.042426  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:25:10.058375  307961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:25:10.062540  307961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:25:10.062596  307961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:25:10.072176  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:25:10.087355  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/274796.pem && ln -fs /usr/share/ca-certificates/274796.pem /etc/ssl/certs/274796.pem"
	I0908 12:25:10.103595  307961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274796.pem
	I0908 12:25:10.108535  307961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 12:22 /usr/share/ca-certificates/274796.pem
	I0908 12:25:10.108593  307961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274796.pem
	I0908 12:25:10.119367  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/274796.pem /etc/ssl/certs/51391683.0"
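
The hash-and-symlink sequence above follows the standard OpenSSL CA directory layout: each trusted certificate is linked as /etc/ssl/certs/<subject-hash>.0, where the hash is the output of openssl x509 -hash. As an illustration (not captured in this log), the link for the minikube CA could be checked on the node, e.g. via minikube ssh; the b5213941 value matches the hash computed a few lines earlier:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                            # symlink back to minikubeCA.pem
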
	I0908 12:25:10.130528  307961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:25:10.137359  307961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 12:25:10.166539  307961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 12:25:10.174771  307961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 12:25:10.183637  307961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 12:25:10.194714  307961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 12:25:10.202219  307961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
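
The six -checkend runs above verify that none of the control-plane certificates expire within the next 86400 seconds (24 hours); openssl exits 0 when the certificate stays valid for at least that long and 1 otherwise, which is what lets the restart proceed without regenerating certs. A hedged standalone equivalent for one of them:

	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "still valid for 24h" || echo "expires within 24h"
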
	I0908 12:25:10.213944  307961 kubeadm.go:392] StartCluster: {Name:functional-140475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:25:10.214072  307961 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0908 12:25:10.257105  307961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 12:25:10.270314  307961 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 12:25:10.270323  307961 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 12:25:10.270373  307961 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 12:25:10.287316  307961 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:25:10.287844  307961 kubeconfig.go:125] found "functional-140475" server: "https://192.168.49.2:8441"
	I0908 12:25:10.289121  307961 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 12:25:10.302693  307961 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-09-08 12:22:43.060573313 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-09-08 12:25:09.302462360 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
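
The diff above shows the only drift is the apiserver enable-admission-plugins value, which this test overrides to NamespaceAutoProvision, so minikube reconfigures the control plane from the new file rather than starting fresh. As an illustration (not captured in this log), one could confirm after the restart that the flag actually reached the apiserver static pod seen later in this run:

	kubectl --context functional-140475 -n kube-system get pod kube-apiserver-functional-140475 \
	  -o jsonpath='{.spec.containers[0].command[*]}' | tr ' ' '\n' | grep enable-admission-plugins
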
	I0908 12:25:10.303352  307961 kubeadm.go:1152] stopping kube-system containers ...
	I0908 12:25:10.303417  307961 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0908 12:25:10.356603  307961 docker.go:484] Stopping containers: [4ce992834f47 ddb7ba696cfb 0101077a9928 490cadb2a355 ae7efbf232ca 09a635f61e28 e1c898d52b18 616b3f6374b7 6bd6d37f8ca1 e96a79d42555 44659d7c7d60 02ea4d507b87 30445a15d3f7 b4bbd0c7e775 5a5f8092cbe5 624d0c2700e9 b0328378d329 fd9d24c6886a 81682e275a9e f043527a68f4 5d4262a965e8 84d41cb3fb65 52d38dc4b252 cde196e9b1bb 031d8bd724fc b5b728e886d2 a09328718e6c 1bfb1c6555aa 125b6fef9ca7 55602dd893bc 2cdf0389c305]
	I0908 12:25:10.356683  307961 ssh_runner.go:195] Run: docker stop 4ce992834f47 ddb7ba696cfb 0101077a9928 490cadb2a355 ae7efbf232ca 09a635f61e28 e1c898d52b18 616b3f6374b7 6bd6d37f8ca1 e96a79d42555 44659d7c7d60 02ea4d507b87 30445a15d3f7 b4bbd0c7e775 5a5f8092cbe5 624d0c2700e9 b0328378d329 fd9d24c6886a 81682e275a9e f043527a68f4 5d4262a965e8 84d41cb3fb65 52d38dc4b252 cde196e9b1bb 031d8bd724fc b5b728e886d2 a09328718e6c 1bfb1c6555aa 125b6fef9ca7 55602dd893bc 2cdf0389c305
	I0908 12:25:10.845271  307961 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0908 12:25:10.970034  307961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 12:25:10.986122  307961 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Sep  8 12:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Sep  8 12:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Sep  8 12:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Sep  8 12:22 /etc/kubernetes/scheduler.conf
	
	I0908 12:25:10.986188  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0908 12:25:10.999197  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0908 12:25:11.014257  307961 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:25:11.014316  307961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 12:25:11.025509  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0908 12:25:11.035696  307961 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:25:11.035751  307961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 12:25:11.047458  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0908 12:25:11.064869  307961 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:25:11.064936  307961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 12:25:11.077313  307961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 12:25:11.090284  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:25:11.149610  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:25:13.846596  307961 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.696960077s)
	I0908 12:25:13.846621  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:25:14.023809  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:25:14.084462  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:25:14.157768  307961 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:25:14.157834  307961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:25:14.658730  307961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:25:15.158925  307961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:25:15.657954  307961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:25:15.684517  307961 api_server.go:72] duration metric: took 1.526754153s to wait for apiserver process to appear ...
	I0908 12:25:15.684531  307961 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:25:15.684550  307961 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 12:25:19.225894  307961 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 12:25:19.225912  307961 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 12:25:19.225924  307961 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 12:25:19.282866  307961 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 12:25:19.282883  307961 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 12:25:19.685226  307961 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 12:25:19.694476  307961 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:25:19.694491  307961 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:25:20.184632  307961 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 12:25:20.199934  307961 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:25:20.199950  307961 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:25:20.685311  307961 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 12:25:20.694963  307961 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0908 12:25:20.708729  307961 api_server.go:141] control plane version: v1.34.0
	I0908 12:25:20.708745  307961 api_server.go:131] duration metric: took 5.024208217s to wait for apiserver health ...
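
The polling above is expected during a control-plane restart: the first probes return 403 because anonymous access to /healthz is only granted once the rbac/bootstrap-roles post-start hook has installed the default cluster roles, then 500 while the remaining hooks (the [-] lines) finish, and finally 200. As an illustration (not captured in this log), the same endpoint can be probed with the node's admin credentials, which also exposes the per-check detail on demand; the binary and kubeconfig paths are the ones already shown in this log:

	minikube -p functional-140475 ssh -- sudo /var/lib/minikube/binaries/v1.34.0/kubectl \
	  --kubeconfig /etc/kubernetes/admin.conf get --raw '/readyz?verbose'
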
	I0908 12:25:20.708753  307961 cni.go:84] Creating CNI manager for ""
	I0908 12:25:20.708763  307961 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 12:25:20.712167  307961 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 12:25:20.715042  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 12:25:20.742203  307961 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
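
The 496-byte file written above is the bridge CNI configuration minikube selects for the docker driver + docker runtime combination. As an illustration (not captured in this log), its contents can be read back from the node:

	minikube -p functional-140475 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
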
	I0908 12:25:20.787329  307961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:25:20.791721  307961 system_pods.go:59] 7 kube-system pods found
	I0908 12:25:20.791752  307961 system_pods.go:61] "coredns-66bc5c9577-p79xg" [4e573d3c-eae9-4bd0-9f33-5b8c667ad3d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:25:20.791760  307961 system_pods.go:61] "etcd-functional-140475" [19b5cd07-26aa-4184-9fa9-fa1945b3f3b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:25:20.791767  307961 system_pods.go:61] "kube-apiserver-functional-140475" [1331084f-dab5-460b-b092-7b94d68ddd6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:25:20.791773  307961 system_pods.go:61] "kube-controller-manager-functional-140475" [a426e33d-fe9b-4438-a5e6-ee5ea4f03b5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:25:20.791778  307961 system_pods.go:61] "kube-proxy-mtw87" [5acd2352-8d2e-41fb-9d3a-e5dec0168fdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 12:25:20.791784  307961 system_pods.go:61] "kube-scheduler-functional-140475" [ebd55573-2475-4e76-ac63-a49d1144d78a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:25:20.791789  307961 system_pods.go:61] "storage-provisioner" [392a6faa-270d-49b3-8967-84c4e22fe60d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:25:20.791794  307961 system_pods.go:74] duration metric: took 4.455237ms to wait for pod list to return data ...
	I0908 12:25:20.791801  307961 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:25:20.794649  307961 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 12:25:20.794668  307961 node_conditions.go:123] node cpu capacity is 2
	I0908 12:25:20.794688  307961 node_conditions.go:105] duration metric: took 2.883386ms to run NodePressure ...
	I0908 12:25:20.794703  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:25:21.097788  307961 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0908 12:25:21.101921  307961 kubeadm.go:735] kubelet initialised
	I0908 12:25:21.101942  307961 kubeadm.go:736] duration metric: took 4.122294ms waiting for restarted kubelet to initialise ...
	I0908 12:25:21.101963  307961 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 12:25:21.111200  307961 ops.go:34] apiserver oom_adj: -16
	I0908 12:25:21.111219  307961 kubeadm.go:593] duration metric: took 10.840884442s to restartPrimaryControlPlane
	I0908 12:25:21.111232  307961 kubeadm.go:394] duration metric: took 10.897292619s to StartCluster
	I0908 12:25:21.111247  307961 settings.go:142] acquiring lock: {Name:mk841a706da4c4a4fb8ce124add06cca32768f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:25:21.111322  307961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-272936/kubeconfig
	I0908 12:25:21.112165  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-272936/kubeconfig: {Name:mk688057e1b67a1163f63dcfb98bef59b0d5e043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:25:21.112699  307961 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:25:21.112487  307961 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 12:25:21.112800  307961 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 12:25:21.113081  307961 addons.go:69] Setting storage-provisioner=true in profile "functional-140475"
	I0908 12:25:21.113096  307961 addons.go:238] Setting addon storage-provisioner=true in "functional-140475"
	W0908 12:25:21.113101  307961 addons.go:247] addon storage-provisioner should already be in state true
	I0908 12:25:21.113126  307961 host.go:66] Checking if "functional-140475" exists ...
	I0908 12:25:21.113184  307961 addons.go:69] Setting default-storageclass=true in profile "functional-140475"
	I0908 12:25:21.113199  307961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-140475"
	I0908 12:25:21.113578  307961 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
	I0908 12:25:21.113593  307961 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
	I0908 12:25:21.118638  307961 out.go:179] * Verifying Kubernetes components...
	I0908 12:25:21.121537  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:25:21.146308  307961 addons.go:238] Setting addon default-storageclass=true in "functional-140475"
	W0908 12:25:21.146323  307961 addons.go:247] addon default-storageclass should already be in state true
	I0908 12:25:21.146350  307961 host.go:66] Checking if "functional-140475" exists ...
	I0908 12:25:21.146846  307961 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
	I0908 12:25:21.159299  307961 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 12:25:21.162327  307961 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:25:21.162339  307961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 12:25:21.162416  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:25:21.187233  307961 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 12:25:21.187251  307961 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 12:25:21.187317  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:25:21.229851  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
	I0908 12:25:21.241810  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
	I0908 12:25:21.355551  307961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:25:21.394959  307961 node_ready.go:35] waiting up to 6m0s for node "functional-140475" to be "Ready" ...
	I0908 12:25:21.399803  307961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:25:21.400739  307961 node_ready.go:49] node "functional-140475" is "Ready"
	I0908 12:25:21.400752  307961 node_ready.go:38] duration metric: took 5.76412ms for node "functional-140475" to be "Ready" ...
	I0908 12:25:21.400767  307961 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:25:21.400814  307961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:25:21.462519  307961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 12:25:22.417655  307961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.017826127s)
	I0908 12:25:22.417705  307961 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.016882263s)
	I0908 12:25:22.417716  307961 api_server.go:72] duration metric: took 1.304964591s to wait for apiserver process to appear ...
	I0908 12:25:22.417720  307961 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:25:22.417736  307961 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 12:25:22.430890  307961 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0908 12:25:22.431744  307961 api_server.go:141] control plane version: v1.34.0
	I0908 12:25:22.431756  307961 api_server.go:131] duration metric: took 14.031034ms to wait for apiserver health ...
	I0908 12:25:22.431762  307961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:25:22.432635  307961 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0908 12:25:22.434987  307961 system_pods.go:59] 7 kube-system pods found
	I0908 12:25:22.435003  307961 system_pods.go:61] "coredns-66bc5c9577-p79xg" [4e573d3c-eae9-4bd0-9f33-5b8c667ad3d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:25:22.435009  307961 system_pods.go:61] "etcd-functional-140475" [19b5cd07-26aa-4184-9fa9-fa1945b3f3b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:25:22.435018  307961 system_pods.go:61] "kube-apiserver-functional-140475" [1331084f-dab5-460b-b092-7b94d68ddd6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:25:22.435024  307961 system_pods.go:61] "kube-controller-manager-functional-140475" [a426e33d-fe9b-4438-a5e6-ee5ea4f03b5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:25:22.435028  307961 system_pods.go:61] "kube-proxy-mtw87" [5acd2352-8d2e-41fb-9d3a-e5dec0168fdc] Running
	I0908 12:25:22.435036  307961 system_pods.go:61] "kube-scheduler-functional-140475" [ebd55573-2475-4e76-ac63-a49d1144d78a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:25:22.435039  307961 system_pods.go:61] "storage-provisioner" [392a6faa-270d-49b3-8967-84c4e22fe60d] Running
	I0908 12:25:22.435045  307961 system_pods.go:74] duration metric: took 3.276793ms to wait for pod list to return data ...
	I0908 12:25:22.435051  307961 default_sa.go:34] waiting for default service account to be created ...
	I0908 12:25:22.435358  307961 addons.go:514] duration metric: took 1.322557707s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0908 12:25:22.437179  307961 default_sa.go:45] found service account: "default"
	I0908 12:25:22.437191  307961 default_sa.go:55] duration metric: took 2.135667ms for default service account to be created ...
	I0908 12:25:22.437198  307961 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 12:25:22.439815  307961 system_pods.go:86] 7 kube-system pods found
	I0908 12:25:22.439831  307961 system_pods.go:89] "coredns-66bc5c9577-p79xg" [4e573d3c-eae9-4bd0-9f33-5b8c667ad3d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:25:22.439845  307961 system_pods.go:89] "etcd-functional-140475" [19b5cd07-26aa-4184-9fa9-fa1945b3f3b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:25:22.439852  307961 system_pods.go:89] "kube-apiserver-functional-140475" [1331084f-dab5-460b-b092-7b94d68ddd6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:25:22.439859  307961 system_pods.go:89] "kube-controller-manager-functional-140475" [a426e33d-fe9b-4438-a5e6-ee5ea4f03b5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:25:22.439862  307961 system_pods.go:89] "kube-proxy-mtw87" [5acd2352-8d2e-41fb-9d3a-e5dec0168fdc] Running
	I0908 12:25:22.439867  307961 system_pods.go:89] "kube-scheduler-functional-140475" [ebd55573-2475-4e76-ac63-a49d1144d78a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:25:22.439870  307961 system_pods.go:89] "storage-provisioner" [392a6faa-270d-49b3-8967-84c4e22fe60d] Running
	I0908 12:25:22.439878  307961 system_pods.go:126] duration metric: took 2.673972ms to wait for k8s-apps to be running ...
	I0908 12:25:22.439884  307961 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 12:25:22.439939  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:25:22.452700  307961 system_svc.go:56] duration metric: took 12.805279ms WaitForService to wait for kubelet
	I0908 12:25:22.452717  307961 kubeadm.go:578] duration metric: took 1.339964883s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:25:22.452735  307961 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:25:22.455620  307961 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 12:25:22.455636  307961 node_conditions.go:123] node cpu capacity is 2
	I0908 12:25:22.455646  307961 node_conditions.go:105] duration metric: took 2.906499ms to run NodePressure ...
	I0908 12:25:22.455667  307961 start.go:241] waiting for startup goroutines ...
	I0908 12:25:22.455674  307961 start.go:246] waiting for cluster config update ...
	I0908 12:25:22.455683  307961 start.go:255] writing updated cluster config ...
	I0908 12:25:22.456014  307961 ssh_runner.go:195] Run: rm -f paused
	I0908 12:25:22.459687  307961 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:25:22.463329  307961 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p79xg" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 12:25:24.468773  307961 pod_ready.go:104] pod "coredns-66bc5c9577-p79xg" is not "Ready", error: <nil>
	W0908 12:25:26.969474  307961 pod_ready.go:104] pod "coredns-66bc5c9577-p79xg" is not "Ready", error: <nil>
	W0908 12:25:28.969843  307961 pod_ready.go:104] pod "coredns-66bc5c9577-p79xg" is not "Ready", error: <nil>
	I0908 12:25:30.468629  307961 pod_ready.go:94] pod "coredns-66bc5c9577-p79xg" is "Ready"
	I0908 12:25:30.468643  307961 pod_ready.go:86] duration metric: took 8.005300659s for pod "coredns-66bc5c9577-p79xg" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:30.471126  307961 pod_ready.go:83] waiting for pod "etcd-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:30.976978  307961 pod_ready.go:94] pod "etcd-functional-140475" is "Ready"
	I0908 12:25:30.976993  307961 pod_ready.go:86] duration metric: took 505.854182ms for pod "etcd-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:30.979862  307961 pod_ready.go:83] waiting for pod "kube-apiserver-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:31.485298  307961 pod_ready.go:94] pod "kube-apiserver-functional-140475" is "Ready"
	I0908 12:25:31.485314  307961 pod_ready.go:86] duration metric: took 505.439843ms for pod "kube-apiserver-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:31.487862  307961 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:32.495602  307961 pod_ready.go:94] pod "kube-controller-manager-functional-140475" is "Ready"
	I0908 12:25:32.495616  307961 pod_ready.go:86] duration metric: took 1.007741045s for pod "kube-controller-manager-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:32.498132  307961 pod_ready.go:83] waiting for pod "kube-proxy-mtw87" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:32.867177  307961 pod_ready.go:94] pod "kube-proxy-mtw87" is "Ready"
	I0908 12:25:32.867192  307961 pod_ready.go:86] duration metric: took 369.046153ms for pod "kube-proxy-mtw87" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:33.067428  307961 pod_ready.go:83] waiting for pod "kube-scheduler-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:33.467201  307961 pod_ready.go:94] pod "kube-scheduler-functional-140475" is "Ready"
	I0908 12:25:33.467215  307961 pod_ready.go:86] duration metric: took 399.772898ms for pod "kube-scheduler-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:33.467225  307961 pod_ready.go:40] duration metric: took 11.007516594s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:25:33.519347  307961 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 12:25:33.522324  307961 out.go:179] * Done! kubectl is now configured to use "functional-140475" cluster and "default" namespace by default
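
At this point the restart has completed and every kube-system pod the test waits on has reported Ready. As an illustration (not captured in this log), the same readiness view is available from the host using the context minikube just configured:

	kubectl --context functional-140475 -n kube-system get pods -o wide
	kubectl --context functional-140475 get nodes
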
	
	
	==> Docker <==
	Sep 08 12:25:52 functional-140475 dockerd[6902]: time="2025-09-08T12:25:52.698145855Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:26:05 functional-140475 dockerd[6902]: time="2025-09-08T12:26:05.465184017Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:26:07 functional-140475 dockerd[6902]: time="2025-09-08T12:26:07.471049137Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:26:34 functional-140475 dockerd[6902]: time="2025-09-08T12:26:34.493180176Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:26:34 functional-140475 dockerd[6902]: time="2025-09-08T12:26:34.716486918Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:27:21 functional-140475 dockerd[6902]: time="2025-09-08T12:27:21.496496033Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:27:22 functional-140475 dockerd[6902]: time="2025-09-08T12:27:22.492748418Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:28:42 functional-140475 dockerd[6902]: time="2025-09-08T12:28:42.565548125Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:28:42 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:28:42Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Sep 08 12:28:49 functional-140475 dockerd[6902]: time="2025-09-08T12:28:49.451371360Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:29:53 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:29:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3b73d8658387f28a994a8d9cdfcf982200ec43222b6a9b96518e9a4a33d1b87e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 08 12:29:53 functional-140475 dockerd[6902]: time="2025-09-08T12:29:53.632232751Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:29:53 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:29:53Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	Sep 08 12:30:05 functional-140475 dockerd[6902]: time="2025-09-08T12:30:05.465229664Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:30:32 functional-140475 dockerd[6902]: time="2025-09-08T12:30:32.526564944Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:31:27 functional-140475 dockerd[6902]: time="2025-09-08T12:31:27.578340096Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:31:27 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:31:27Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Sep 08 12:31:27 functional-140475 dockerd[6902]: time="2025-09-08T12:31:27.798387542Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:31:40 functional-140475 dockerd[6902]: time="2025-09-08T12:31:40.486918678Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:32:49 functional-140475 dockerd[6902]: time="2025-09-08T12:32:49.469534294Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:35:36 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:35:36Z" level=info msg="Stop pulling image kicbase/echo-server:latest: Status: Downloaded newer image for kicbase/echo-server:latest"
	Sep 08 12:35:50 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:35:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e1729aa9858a3bbaafc3121ea7b119e5dcf06b6c86c709db72adbfd0167a211c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 08 12:35:52 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:35:52Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 08 12:35:52 functional-140475 dockerd[6902]: time="2025-09-08T12:35:52.381166395Z" level=info msg="ignoring event" container=70117b128644e2e4767f5fbbfdc02ceeecd480efe2fdcb53a147bb5f55a75ea6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 08 12:35:53 functional-140475 dockerd[6902]: time="2025-09-08T12:35:53.609256393Z" level=info msg="ignoring event" container=e1729aa9858a3bbaafc3121ea7b119e5dcf06b6c86c709db72adbfd0167a211c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	70117b128644e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   1 second ago        Exited              mount-munger              0                   e1729aa9858a3       busybox-mount
	6049e4afeaa34       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           17 seconds ago      Running             echo-server               0                   3b73d8658387f       hello-node-75c85bcc94-x22bh
	a0f0bf25d6321       nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                         10 minutes ago      Running             nginx                     0                   f7c16945d9528       nginx-svc
	28f6049f43b21       138784d87c9c5                                                                                         10 minutes ago      Running             coredns                   2                   baa450b79b795       coredns-66bc5c9577-p79xg
	ae4f0a6634637       6fc32d66c1411                                                                                         10 minutes ago      Running             kube-proxy                3                   4aa0c34c1b998       kube-proxy-mtw87
	3e56d761c8413       ba04bb24b9575                                                                                         10 minutes ago      Running             storage-provisioner       4                   5df3eb436cb71       storage-provisioner
	8e057b7f41de7       a25f5ef9c34c3                                                                                         10 minutes ago      Running             kube-scheduler            3                   02257bbd60ea0       kube-scheduler-functional-140475
	264ed758e3516       d291939e99406                                                                                         10 minutes ago      Running             kube-apiserver            0                   a12878798ff69       kube-apiserver-functional-140475
	ca1f2eb2a56e6       a1894772a478e                                                                                         10 minutes ago      Running             etcd                      2                   7876da18c12d9       etcd-functional-140475
	b5a5bff40e315       996be7e86d9b3                                                                                         10 minutes ago      Running             kube-controller-manager   3                   46117db363397       kube-controller-manager-functional-140475
	207c0c3df856b       996be7e86d9b3                                                                                         10 minutes ago      Created             kube-controller-manager   2                   ddb7ba696cfb7       kube-controller-manager-functional-140475
	152b108a85c33       a25f5ef9c34c3                                                                                         10 minutes ago      Created             kube-scheduler            2                   0101077a99284       kube-scheduler-functional-140475
	4ce992834f477       6fc32d66c1411                                                                                         10 minutes ago      Exited              kube-proxy                2                   e1c898d52b181       kube-proxy-mtw87
	6bd6d37f8ca18       ba04bb24b9575                                                                                         11 minutes ago      Exited              storage-provisioner       3                   5d4262a965e8d       storage-provisioner
	e96a79d425559       138784d87c9c5                                                                                         11 minutes ago      Exited              coredns                   1                   624d0c2700e94       coredns-66bc5c9577-p79xg
	02ea4d507b878       a1894772a478e                                                                                         11 minutes ago      Exited              etcd                      1                   f043527a68f42       etcd-functional-140475
	
	
	==> coredns [28f6049f43b2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53675 - 53593 "HINFO IN 2100446065665056732.4545956877188492551. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030972055s
	
	
	==> coredns [e96a79d42555] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50588 - 59981 "HINFO IN 3063204499314061978.8433951396937313554. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013894801s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-140475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-140475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=functional-140475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T12_23_00_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:22:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-140475
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:35:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:35:51 +0000   Mon, 08 Sep 2025 12:22:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:35:51 +0000   Mon, 08 Sep 2025 12:22:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:35:51 +0000   Mon, 08 Sep 2025 12:22:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:35:51 +0000   Mon, 08 Sep 2025 12:22:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-140475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 28cc2cd20fe74a02bfe1586146117dde
	  System UUID:                a03639dc-39eb-4af1-8eff-ffc8a710a78a
	  Boot ID:                    3b69f852-7505-47f7-82de-581d66319e23
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-x22bh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  default                     hello-node-connect-7d85dfc575-t5bmg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-p79xg                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-140475                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-140475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-140475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-mtw87                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-140475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node functional-140475 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node functional-140475 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node functional-140475 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-140475 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-140475 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-140475 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-140475 event: Registered Node functional-140475 in Controller
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   RegisteredNode           11m                node-controller  Node functional-140475 event: Registered Node functional-140475 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-140475 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-140475 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-140475 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node functional-140475 event: Registered Node functional-140475 in Controller
	
	
	==> dmesg <==
	[Sep 8 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014150] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.486895] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033827] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.725700] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.488700] kauditd_printk_skb: 36 callbacks suppressed
	[Sep 8 10:40] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep 8 11:30] hrtimer: interrupt took 33050655 ns
	[Sep 8 12:15] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [02ea4d507b87] <==
	{"level":"warn","ts":"2025-09-08T12:24:16.639838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.649417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.688735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.731763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.765271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.778536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.910398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39510","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T12:24:57.890978Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T12:24:57.891064Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-140475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-08T12:24:57.891169Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T12:24:57.891444Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T12:25:04.896880Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T12:25:04.896943Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-08T12:25:04.897086Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T12:25:04.897172Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T12:25:04.897205Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T12:25:04.897262Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-08T12:25:04.897312Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-08T12:25:04.899523Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T12:25:04.899576Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T12:25:04.899589Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T12:25:04.902221Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-08T12:25:04.902306Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T12:25:04.902428Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-08T12:25:04.902507Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-140475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ca1f2eb2a56e] <==
	{"level":"warn","ts":"2025-09-08T12:25:17.631012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.686593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.692537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.711751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.736991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.763646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.841197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.852797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.876013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.907257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.950331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.980848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.017529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.061396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.088202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.122939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.166770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.220953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.244197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.257729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.273735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.337140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42186","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T12:35:16.771183Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1157}
	{"level":"info","ts":"2025-09-08T12:35:16.795010Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1157,"took":"23.469328ms","hash":1845616592,"current-db-size-bytes":3301376,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1523712,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-09-08T12:35:16.795073Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1845616592,"revision":1157,"compact-revision":-1}
	
	
	==> kernel <==
	 12:35:54 up  2:18,  0 users,  load average: 1.57, 0.70, 0.85
	Linux functional-140475 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [264ed758e351] <==
	I0908 12:25:21.073251       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0908 12:25:22.889532       1 controller.go:667] quota admission added evaluator for: endpoints
	I0908 12:25:22.987473       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0908 12:25:23.086542       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 12:25:36.431394       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.63.105"}
	I0908 12:25:43.496219       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.95.33"}
	I0908 12:25:52.054365       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.5.255"}
	I0908 12:26:33.092750       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:26:44.800277       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:27:49.066443       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:27:58.039300       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:28:52.414335       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:29:25.650405       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:29:52.823851       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.39.142"}
	I0908 12:30:14.183727       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:30:47.334511       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:31:43.367778       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:32:05.544623       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:32:45.391793       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:33:29.726018       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:34:06.733632       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:34:30.956819       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:35:19.387854       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 12:35:24.827665       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:35:52.296396       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [207c0c3df856] <==
	
	
	==> kube-controller-manager [b5a5bff40e31] <==
	I0908 12:25:22.680039       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 12:25:22.680380       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 12:25:22.680386       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0908 12:25:22.680138       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 12:25:22.681216       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 12:25:22.682300       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 12:25:22.685677       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 12:25:22.686301       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 12:25:22.691882       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 12:25:22.696116       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 12:25:22.698431       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 12:25:22.711012       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 12:25:22.714314       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 12:25:22.723484       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 12:25:22.729486       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 12:25:22.729518       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 12:25:22.729912       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 12:25:22.730035       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 12:25:22.729535       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 12:25:22.729765       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 12:25:22.731485       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0908 12:25:22.731720       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0908 12:25:22.734707       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 12:25:22.736252       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-140475"
	I0908 12:25:22.737513       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [4ce992834f47] <==
	I0908 12:25:10.488602       1 server_linux.go:53] "Using iptables proxy"
	I0908 12:25:10.606414       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0908 12:25:10.607244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-140475&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [ae4f0a663463] <==
	I0908 12:25:20.785069       1 server_linux.go:53] "Using iptables proxy"
	I0908 12:25:20.921442       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 12:25:21.025213       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 12:25:21.025245       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 12:25:21.025320       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 12:25:21.179249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 12:25:21.182406       1 server_linux.go:132] "Using iptables Proxier"
	I0908 12:25:21.227302       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 12:25:21.227578       1 server.go:527] "Version info" version="v1.34.0"
	I0908 12:25:21.227593       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:25:21.229207       1 config.go:200] "Starting service config controller"
	I0908 12:25:21.229217       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 12:25:21.229243       1 config.go:106] "Starting endpoint slice config controller"
	I0908 12:25:21.229247       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 12:25:21.229258       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 12:25:21.229262       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 12:25:21.237957       1 config.go:309] "Starting node config controller"
	I0908 12:25:21.237979       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 12:25:21.237987       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 12:25:21.330477       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 12:25:21.340787       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 12:25:21.340836       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [152b108a85c3] <==
	
	
	==> kube-scheduler [8e057b7f41de] <==
	I0908 12:25:18.036734       1 serving.go:386] Generated self-signed cert in-memory
	I0908 12:25:19.475887       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 12:25:19.475921       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:25:19.484805       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 12:25:19.485031       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 12:25:19.485180       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:25:19.485277       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:25:19.485375       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 12:25:19.485465       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 12:25:19.486343       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 12:25:19.487832       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 12:25:19.585623       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 12:25:19.585742       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:25:19.585630       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Sep 08 12:34:43 functional-140475 kubelet[8797]: E0908 12:34:43.237739    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:34:44 functional-140475 kubelet[8797]: E0908 12:34:44.238295    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-x22bh" podUID="81b0695c-6ef6-40b1-a20f-69e254ab61f4"
	Sep 08 12:34:55 functional-140475 kubelet[8797]: E0908 12:34:55.237900    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:34:57 functional-140475 kubelet[8797]: E0908 12:34:57.237805    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:34:59 functional-140475 kubelet[8797]: E0908 12:34:59.237348    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-x22bh" podUID="81b0695c-6ef6-40b1-a20f-69e254ab61f4"
	Sep 08 12:35:08 functional-140475 kubelet[8797]: E0908 12:35:08.246019    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:35:10 functional-140475 kubelet[8797]: E0908 12:35:10.237534    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:35:12 functional-140475 kubelet[8797]: E0908 12:35:12.237167    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-x22bh" podUID="81b0695c-6ef6-40b1-a20f-69e254ab61f4"
	Sep 08 12:35:19 functional-140475 kubelet[8797]: E0908 12:35:19.237759    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:35:22 functional-140475 kubelet[8797]: E0908 12:35:22.237909    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:35:24 functional-140475 kubelet[8797]: E0908 12:35:24.240710    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-x22bh" podUID="81b0695c-6ef6-40b1-a20f-69e254ab61f4"
	Sep 08 12:35:31 functional-140475 kubelet[8797]: E0908 12:35:31.237639    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:35:35 functional-140475 kubelet[8797]: E0908 12:35:35.237491    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:35:43 functional-140475 kubelet[8797]: E0908 12:35:43.237882    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:35:49 functional-140475 kubelet[8797]: I0908 12:35:49.566036    8797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-75c85bcc94-x22bh" podStartSLOduration=14.01473664 podStartE2EDuration="5m57.566017664s" podCreationTimestamp="2025-09-08 12:29:52 +0000 UTC" firstStartedPulling="2025-09-08 12:29:53.246497601 +0000 UTC m=+279.229277530" lastFinishedPulling="2025-09-08 12:35:36.797778617 +0000 UTC m=+622.780558554" observedRunningTime="2025-09-08 12:35:37.323951254 +0000 UTC m=+623.306731215" watchObservedRunningTime="2025-09-08 12:35:49.566017664 +0000 UTC m=+635.548797601"
	Sep 08 12:35:49 functional-140475 kubelet[8797]: I0908 12:35:49.722420    8797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66vdl\" (UniqueName: \"kubernetes.io/projected/c71cd365-d63f-4c48-bf22-e8b326c5908e-kube-api-access-66vdl\") pod \"busybox-mount\" (UID: \"c71cd365-d63f-4c48-bf22-e8b326c5908e\") " pod="default/busybox-mount"
	Sep 08 12:35:49 functional-140475 kubelet[8797]: I0908 12:35:49.722482    8797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c71cd365-d63f-4c48-bf22-e8b326c5908e-test-volume\") pod \"busybox-mount\" (UID: \"c71cd365-d63f-4c48-bf22-e8b326c5908e\") " pod="default/busybox-mount"
	Sep 08 12:35:50 functional-140475 kubelet[8797]: E0908 12:35:50.244796    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:35:53 functional-140475 kubelet[8797]: I0908 12:35:53.761871    8797 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66vdl\" (UniqueName: \"kubernetes.io/projected/c71cd365-d63f-4c48-bf22-e8b326c5908e-kube-api-access-66vdl\") pod \"c71cd365-d63f-4c48-bf22-e8b326c5908e\" (UID: \"c71cd365-d63f-4c48-bf22-e8b326c5908e\") "
	Sep 08 12:35:53 functional-140475 kubelet[8797]: I0908 12:35:53.762337    8797 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c71cd365-d63f-4c48-bf22-e8b326c5908e-test-volume\") pod \"c71cd365-d63f-4c48-bf22-e8b326c5908e\" (UID: \"c71cd365-d63f-4c48-bf22-e8b326c5908e\") "
	Sep 08 12:35:53 functional-140475 kubelet[8797]: I0908 12:35:53.762463    8797 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c71cd365-d63f-4c48-bf22-e8b326c5908e-test-volume" (OuterVolumeSpecName: "test-volume") pod "c71cd365-d63f-4c48-bf22-e8b326c5908e" (UID: "c71cd365-d63f-4c48-bf22-e8b326c5908e"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Sep 08 12:35:53 functional-140475 kubelet[8797]: I0908 12:35:53.766864    8797 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c71cd365-d63f-4c48-bf22-e8b326c5908e-kube-api-access-66vdl" (OuterVolumeSpecName: "kube-api-access-66vdl") pod "c71cd365-d63f-4c48-bf22-e8b326c5908e" (UID: "c71cd365-d63f-4c48-bf22-e8b326c5908e"). InnerVolumeSpecName "kube-api-access-66vdl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Sep 08 12:35:53 functional-140475 kubelet[8797]: I0908 12:35:53.863150    8797 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-66vdl\" (UniqueName: \"kubernetes.io/projected/c71cd365-d63f-4c48-bf22-e8b326c5908e-kube-api-access-66vdl\") on node \"functional-140475\" DevicePath \"\""
	Sep 08 12:35:53 functional-140475 kubelet[8797]: I0908 12:35:53.863192    8797 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c71cd365-d63f-4c48-bf22-e8b326c5908e-test-volume\") on node \"functional-140475\" DevicePath \"\""
	Sep 08 12:35:54 functional-140475 kubelet[8797]: I0908 12:35:54.513431    8797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1729aa9858a3bbaafc3121ea7b119e5dcf06b6c86c709db72adbfd0167a211c"
	
	
	==> storage-provisioner [3e56d761c841] <==
	W0908 12:35:29.103642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:31.107024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:31.112985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:33.115651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:33.122731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:35.126575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:35.131710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:37.134424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:37.139207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:39.143107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:39.147685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:41.150631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:41.157455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:43.160784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:43.165630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:45.180260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:45.190434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:47.193964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:47.199162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:49.202069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:49.208050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:51.211666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:51.217323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:53.220902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:35:53.225524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6bd6d37f8ca1] <==
	I0908 12:24:39.805610       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 12:24:39.817707       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 12:24:39.818001       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 12:24:39.827051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:43.281855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:47.542472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:51.141255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:54.195301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:57.217075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:57.222388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 12:24:57.222556       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 12:24:57.222806       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-140475_be3659dd-d2e5-49cd-8419-febf302bbd52!
	I0908 12:24:57.224364       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6f4e765b-be4a-4c1c-98b1-2642ed77f8a2", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-140475_be3659dd-d2e5-49cd-8419-febf302bbd52 became leader
	W0908 12:24:57.228147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:57.233779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 12:24:57.323828       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-140475_be3659dd-d2e5-49cd-8419-febf302bbd52!
	

                                                
                                                
-- /stdout --
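The storage-provisioner log above shows its leader election still locking on the v1 Endpoints object kube-system/k8s.io-minikube-hostpath, which is what generates the repeated "v1 Endpoints is deprecated in v1.33+" client warnings. A minimal way to look at that lock by hand, assuming the functional-140475 context from this run is still reachable:

	kubectl --context functional-140475 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# the holder identity is stored in the control-plane.alpha.kubernetes.io/leader annotation
	kubectl --context functional-140475 -n kube-system get leases
	# Lease-based locks (coordination.k8s.io/v1) are the non-deprecated alternative and do not trigger this warning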
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-140475 -n functional-140475
helpers_test.go:269: (dbg) Run:  kubectl --context functional-140475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-connect-7d85dfc575-t5bmg sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-140475 describe pod busybox-mount hello-node-connect-7d85dfc575-t5bmg sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-140475 describe pod busybox-mount hello-node-connect-7d85dfc575-t5bmg sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-140475/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 12:35:49 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://70117b128644e2e4767f5fbbfdc02ceeecd480efe2fdcb53a147bb5f55a75ea6
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 12:35:52 +0000
	      Finished:     Mon, 08 Sep 2025 12:35:52 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-66vdl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-66vdl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6s    default-scheduler  Successfully assigned default/busybox-mount to functional-140475
	  Normal  Pulling    5s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.051s (2.051s including waiting). Image size: 3547125 bytes.
	  Normal  Created    3s    kubelet            Created container: mount-munger
	  Normal  Started    3s    kubelet            Started container mount-munger
	
	
	Name:             hello-node-connect-7d85dfc575-t5bmg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-140475/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 12:25:51 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sks76 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sks76:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-t5bmg to functional-140475
	  Normal   Pulling    7m6s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m6s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m6s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    0s (x44 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     0s (x44 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-140475/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 12:25:49 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h6sml (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-h6sml:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-140475
	  Warning  Failed     8m34s (x3 over 9m50s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m13s (x5 over 10m)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m13s (x2 over 10m)    kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m13s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m55s (x21 over 10m)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4m55s (x21 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.78s)
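Both this failure and the PersistentVolumeClaim failure below share the root cause visible in the pod events: unauthenticated pulls of kicbase/echo-server and docker.io/nginx hit Docker Hub's toomanyrequests limit, so the containers never leave ImagePullBackOff. A sketch of one way to take the registry out of the loop when reproducing locally (not part of the test run; the :latest tags are an assumption):

	docker pull kicbase/echo-server:latest
	docker pull nginx:latest
	# side-load the images into the cluster node so kubelet never has to pull from Docker Hub
	out/minikube-linux-arm64 -p functional-140475 image load kicbase/echo-server:latest
	out/minikube-linux-arm64 -p functional-140475 image load nginx:latest
	# alternatively, docker login raises the per-account limit for authenticated pulls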

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (249.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [392a6faa-270d-49b3-8967-84c4e22fe60d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003406535s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-140475 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-140475 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-140475 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-140475 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1] Pending
helpers_test.go:352: "sp-pod" [ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-140475 -n functional-140475
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-08 12:29:50.294535124 +0000 UTC m=+833.614610309
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-140475 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-140475 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-140475/192.168.49.2
Start Time:       Mon, 08 Sep 2025 12:25:49 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:  10.244.0.9
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h6sml (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-h6sml:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  4m                     default-scheduler  Successfully assigned default/sp-pod to functional-140475
  Warning  Failed     2m29s (x3 over 3m45s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    68s (x5 over 4m)       kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     68s (x2 over 4m)       kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     68s (x5 over 4m)       kubelet            Error: ErrImagePull
  Normal   BackOff    2s (x15 over 3m59s)    kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     2s (x15 over 3m59s)    kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-140475 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-140475 logs sp-pod -n default: exit status 1 (103.916263ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-140475 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 4m0s: context deadline exceeded
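The 4m0s wait above is driven by the test harness; roughly the same check can be run by hand against the myclaim/sp-pod objects named in these logs, assuming the cluster is still up:

	kubectl --context functional-140475 get pvc myclaim -o jsonpath='{.status.phase}'
	# the pod was scheduled, so the claim appears to have bound; the pod itself stays Pending
	kubectl --context functional-140475 wait --for=condition=Ready pod/sp-pod --timeout=240s
	# this times out for the same reason the test does: the docker.io/nginx image cannot be pulled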
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-140475
helpers_test.go:243: (dbg) docker inspect functional-140475:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341",
	        "Created": "2025-09-08T12:22:33.259116131Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300594,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T12:22:33.335511126Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/hostname",
	        "HostsPath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/hosts",
	        "LogPath": "/var/lib/docker/containers/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341/f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341-json.log",
	        "Name": "/functional-140475",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-140475:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-140475",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f779030ae61f3535d0f5115329c87b778b458ec5edcb503f2cd898753ed14341",
	                "LowerDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804-init/diff:/var/lib/docker/overlay2/4e9e34582c8fac27b8acdffb5ccaf9d8b30c2dae25a1b3b2b79fa116bc7d16cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d312c914cb3c70debf4b39ba5376f977a50ea3960281d7f5c74cdcd5b6aa7804/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-140475",
	                "Source": "/var/lib/docker/volumes/functional-140475/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-140475",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-140475",
	                "name.minikube.sigs.k8s.io": "functional-140475",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ce5a484316b053698273bb74a7bfdcf5e2405e0d4a8e758d9e2edbdb00445ff",
	            "SandboxKey": "/var/run/docker/netns/3ce5a484316b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-140475": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:bd:9a:64:d3:4a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c192c78e034e0a71a0e148767e9b0ec7ae14d2f5e09e1cfa298441ea22bbe0e5",
	                    "EndpointID": "b362f57131db43ce06461506a6aa968ca551222e3cb2b1e2a1609968c677929a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-140475",
	                        "f779030ae61f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
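The inspect dump above is the same data minikube queries field by field later in these logs; individual values can be pulled directly with a Go template, for example the forwarded SSH port and the node IP from this run:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-140475
	# 33143
	docker container inspect -f '{{(index .NetworkSettings.Networks "functional-140475").IPAddress}}' functional-140475
	# 192.168.49.2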
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-140475 -n functional-140475
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-140475 logs -n 25: (1.201475716s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-140475 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:24 UTC │ 08 Sep 25 12:24 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.36.0 │ 08 Sep 25 12:24 UTC │ 08 Sep 25 12:24 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.36.0 │ 08 Sep 25 12:24 UTC │ 08 Sep 25 12:24 UTC │
	│ kubectl │ functional-140475 kubectl -- --context functional-140475 get pods                                                          │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:24 UTC │ 08 Sep 25 12:24 UTC │
	│ start   │ -p functional-140475 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:24 UTC │ 08 Sep 25 12:25 UTC │
	│ service │ invalid-svc -p functional-140475                                                                                           │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │                     │
	│ cp      │ functional-140475 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ config  │ functional-140475 config unset cpus                                                                                        │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ config  │ functional-140475 config get cpus                                                                                          │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │                     │
	│ config  │ functional-140475 config set cpus 2                                                                                        │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ config  │ functional-140475 config get cpus                                                                                          │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ config  │ functional-140475 config unset cpus                                                                                        │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ ssh     │ functional-140475 ssh -n functional-140475 sudo cat /home/docker/cp-test.txt                                               │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ config  │ functional-140475 config get cpus                                                                                          │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │                     │
	│ ssh     │ functional-140475 ssh echo hello                                                                                           │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ cp      │ functional-140475 cp functional-140475:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2204074601/001/cp-test.txt │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ ssh     │ functional-140475 ssh cat /etc/hostname                                                                                    │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ ssh     │ functional-140475 ssh -n functional-140475 sudo cat /home/docker/cp-test.txt                                               │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ tunnel  │ functional-140475 tunnel --alsologtostderr                                                                                 │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │                     │
	│ tunnel  │ functional-140475 tunnel --alsologtostderr                                                                                 │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │                     │
	│ cp      │ functional-140475 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ ssh     │ functional-140475 ssh -n functional-140475 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ tunnel  │ functional-140475 tunnel --alsologtostderr                                                                                 │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │                     │
	│ addons  │ functional-140475 addons list                                                                                              │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ addons  │ functional-140475 addons list -o json                                                                                      │ functional-140475 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:24:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:24:38.805779  307961 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:24:38.805904  307961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:24:38.805908  307961 out.go:374] Setting ErrFile to fd 2...
	I0908 12:24:38.805912  307961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:24:38.806275  307961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
	I0908 12:24:38.807269  307961 out.go:368] Setting JSON to false
	I0908 12:24:38.808261  307961 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7629,"bootTime":1757326650,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 12:24:38.808322  307961 start.go:140] virtualization:  
	I0908 12:24:38.813681  307961 out.go:179] * [functional-140475] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 12:24:38.816646  307961 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 12:24:38.816757  307961 notify.go:220] Checking for updates...
	I0908 12:24:38.822592  307961 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:24:38.825555  307961 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig
	I0908 12:24:38.828314  307961 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube
	I0908 12:24:38.831112  307961 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 12:24:38.834107  307961 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:24:38.837571  307961 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:24:38.837657  307961 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:24:38.870196  307961 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:24:38.870327  307961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:24:38.946560  307961 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-08 12:24:38.936789289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:24:38.946672  307961 docker.go:318] overlay module found
	I0908 12:24:38.949850  307961 out.go:179] * Using the docker driver based on existing profile
	I0908 12:24:38.952858  307961 start.go:304] selected driver: docker
	I0908 12:24:38.952869  307961 start.go:918] validating driver "docker" against &{Name:functional-140475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:24:38.952972  307961 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:24:38.953074  307961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:24:39.015765  307961 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-08 12:24:39.005891577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:24:39.022160  307961 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:24:39.022186  307961 cni.go:84] Creating CNI manager for ""
	I0908 12:24:39.022255  307961 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 12:24:39.022319  307961 start.go:348] cluster config:
	{Name:functional-140475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:24:39.025507  307961 out.go:179] * Starting "functional-140475" primary control-plane node in "functional-140475" cluster
	I0908 12:24:39.028338  307961 cache.go:123] Beginning downloading kic base image for docker with docker
	I0908 12:24:39.031177  307961 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:24:39.033923  307961 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 12:24:39.033970  307961 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-272936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	I0908 12:24:39.033978  307961 cache.go:58] Caching tarball of preloaded images
	I0908 12:24:39.034007  307961 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:24:39.034063  307961 preload.go:172] Found /home/jenkins/minikube-integration/21508-272936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0908 12:24:39.034072  307961 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0908 12:24:39.034187  307961 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/config.json ...
	I0908 12:24:39.053977  307961 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 12:24:39.053988  307961 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 12:24:39.054001  307961 cache.go:232] Successfully downloaded all kic artifacts
	I0908 12:24:39.054022  307961 start.go:360] acquireMachinesLock for functional-140475: {Name:mk6b5e0f12e93a7e43a3198f394e5ecd19765868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:24:39.054075  307961 start.go:364] duration metric: took 37.038µs to acquireMachinesLock for "functional-140475"
	I0908 12:24:39.054093  307961 start.go:96] Skipping create...Using existing machine configuration
	I0908 12:24:39.054104  307961 fix.go:54] fixHost starting: 
	I0908 12:24:39.054404  307961 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
	I0908 12:24:39.070636  307961 fix.go:112] recreateIfNeeded on functional-140475: state=Running err=<nil>
	W0908 12:24:39.070655  307961 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 12:24:39.073856  307961 out.go:252] * Updating the running docker "functional-140475" container ...
	I0908 12:24:39.073883  307961 machine.go:93] provisionDockerMachine start ...
	I0908 12:24:39.073972  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:39.095706  307961 main.go:141] libmachine: Using SSH client type: native
	I0908 12:24:39.096021  307961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0908 12:24:39.096028  307961 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:24:39.219506  307961 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-140475
	
	I0908 12:24:39.219522  307961 ubuntu.go:182] provisioning hostname "functional-140475"
	I0908 12:24:39.219593  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:39.237514  307961 main.go:141] libmachine: Using SSH client type: native
	I0908 12:24:39.237826  307961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0908 12:24:39.237836  307961 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-140475 && echo "functional-140475" | sudo tee /etc/hostname
	I0908 12:24:39.376149  307961 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-140475
	
	I0908 12:24:39.376231  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:39.394454  307961 main.go:141] libmachine: Using SSH client type: native
	I0908 12:24:39.394772  307961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0908 12:24:39.394787  307961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-140475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-140475/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-140475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:24:39.526330  307961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:24:39.526364  307961 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-272936/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-272936/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-272936/.minikube}
	I0908 12:24:39.526386  307961 ubuntu.go:190] setting up certificates
	I0908 12:24:39.526395  307961 provision.go:84] configureAuth start
	I0908 12:24:39.526473  307961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-140475
	I0908 12:24:39.544256  307961 provision.go:143] copyHostCerts
	I0908 12:24:39.544312  307961 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-272936/.minikube/ca.pem, removing ...
	I0908 12:24:39.544333  307961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-272936/.minikube/ca.pem
	I0908 12:24:39.544396  307961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-272936/.minikube/ca.pem (1078 bytes)
	I0908 12:24:39.544485  307961 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-272936/.minikube/cert.pem, removing ...
	I0908 12:24:39.544489  307961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-272936/.minikube/cert.pem
	I0908 12:24:39.544514  307961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-272936/.minikube/cert.pem (1123 bytes)
	I0908 12:24:39.544562  307961 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-272936/.minikube/key.pem, removing ...
	I0908 12:24:39.544565  307961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-272936/.minikube/key.pem
	I0908 12:24:39.544587  307961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-272936/.minikube/key.pem (1679 bytes)
	I0908 12:24:39.544630  307961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-272936/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca-key.pem org=jenkins.functional-140475 san=[127.0.0.1 192.168.49.2 functional-140475 localhost minikube]
	I0908 12:24:40.206452  307961 provision.go:177] copyRemoteCerts
	I0908 12:24:40.206513  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:24:40.206561  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:40.224526  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
	I0908 12:24:40.317369  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 12:24:40.343589  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 12:24:40.374905  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 12:24:40.404704  307961 provision.go:87] duration metric: took 878.296678ms to configureAuth
	I0908 12:24:40.404722  307961 ubuntu.go:206] setting minikube options for container-runtime
	I0908 12:24:40.404930  307961 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:24:40.404987  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:40.427465  307961 main.go:141] libmachine: Using SSH client type: native
	I0908 12:24:40.427769  307961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0908 12:24:40.427777  307961 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0908 12:24:40.553335  307961 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0908 12:24:40.553346  307961 ubuntu.go:71] root file system type: overlay
	I0908 12:24:40.553467  307961 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0908 12:24:40.553534  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:40.572469  307961 main.go:141] libmachine: Using SSH client type: native
	I0908 12:24:40.572772  307961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0908 12:24:40.572847  307961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0908 12:24:40.708472  307961 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0908 12:24:40.708551  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:40.726963  307961 main.go:141] libmachine: Using SSH client type: native
	I0908 12:24:40.727265  307961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0908 12:24:40.727280  307961 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0908 12:24:40.857687  307961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:24:40.857700  307961 machine.go:96] duration metric: took 1.783809767s to provisionDockerMachine
	I0908 12:24:40.857710  307961 start.go:293] postStartSetup for "functional-140475" (driver="docker")
	I0908 12:24:40.857720  307961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:24:40.857815  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:24:40.857856  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:40.876555  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
	I0908 12:24:40.969373  307961 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:24:40.972964  307961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 12:24:40.972987  307961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 12:24:40.972996  307961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 12:24:40.973002  307961 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 12:24:40.973011  307961 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-272936/.minikube/addons for local assets ...
	I0908 12:24:40.973067  307961 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-272936/.minikube/files for local assets ...
	I0908 12:24:40.973153  307961 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-272936/.minikube/files/etc/ssl/certs/2747962.pem -> 2747962.pem in /etc/ssl/certs
	I0908 12:24:40.973232  307961 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-272936/.minikube/files/etc/test/nested/copy/274796/hosts -> hosts in /etc/test/nested/copy/274796
	I0908 12:24:40.973277  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/274796
	I0908 12:24:40.982259  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/files/etc/ssl/certs/2747962.pem --> /etc/ssl/certs/2747962.pem (1708 bytes)
	I0908 12:24:41.008450  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/files/etc/test/nested/copy/274796/hosts --> /etc/test/nested/copy/274796/hosts (40 bytes)
	I0908 12:24:41.042546  307961 start.go:296] duration metric: took 184.820828ms for postStartSetup
	I0908 12:24:41.042641  307961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:24:41.042685  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:41.060425  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
	I0908 12:24:41.149190  307961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 12:24:41.154107  307961 fix.go:56] duration metric: took 2.100003075s for fixHost
	I0908 12:24:41.154123  307961 start.go:83] releasing machines lock for "functional-140475", held for 2.100040803s
	I0908 12:24:41.154187  307961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-140475
	I0908 12:24:41.170748  307961 ssh_runner.go:195] Run: cat /version.json
	I0908 12:24:41.170807  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:41.171122  307961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 12:24:41.171175  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:24:41.200909  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
	I0908 12:24:41.201658  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
	I0908 12:24:41.409551  307961 ssh_runner.go:195] Run: systemctl --version
	I0908 12:24:41.413819  307961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:24:41.418199  307961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0908 12:24:41.435960  307961 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0908 12:24:41.436036  307961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:24:41.445590  307961 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 12:24:41.445610  307961 start.go:495] detecting cgroup driver to use...
	I0908 12:24:41.445654  307961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 12:24:41.445751  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:24:41.462870  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0908 12:24:41.475459  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0908 12:24:41.486009  307961 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0908 12:24:41.486079  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0908 12:24:41.496429  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:24:41.509909  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0908 12:24:41.525577  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0908 12:24:41.537217  307961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:24:41.548235  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0908 12:24:41.559195  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0908 12:24:41.571795  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0908 12:24:41.583341  307961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:24:41.592890  307961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:24:41.602220  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:24:41.721513  307961 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0908 12:24:41.953565  307961 start.go:495] detecting cgroup driver to use...
	I0908 12:24:41.953605  307961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 12:24:41.953653  307961 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0908 12:24:41.970094  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:24:41.984040  307961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:24:42.003881  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:24:42.028519  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0908 12:24:42.042258  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:24:42.064216  307961 ssh_runner.go:195] Run: which cri-dockerd
	I0908 12:24:42.068678  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0908 12:24:42.079484  307961 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0908 12:24:42.104657  307961 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0908 12:24:42.224934  307961 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0908 12:24:42.345867  307961 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0908 12:24:42.345965  307961 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0908 12:24:42.367785  307961 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0908 12:24:42.379947  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:24:42.490097  307961 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0908 12:25:08.449575  307961 ssh_runner.go:235] Completed: sudo systemctl restart docker: (25.959453779s)
	I0908 12:25:08.449637  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:25:08.463783  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0908 12:25:08.477783  307961 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0908 12:25:08.502065  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:25:08.515472  307961 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0908 12:25:08.611793  307961 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0908 12:25:08.706944  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:25:08.807621  307961 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0908 12:25:08.822702  307961 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0908 12:25:08.834746  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:25:08.927715  307961 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0908 12:25:09.006840  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0908 12:25:09.022281  307961 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0908 12:25:09.022341  307961 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0908 12:25:09.026672  307961 start.go:563] Will wait 60s for crictl version
	I0908 12:25:09.026729  307961 ssh_runner.go:195] Run: which crictl
	I0908 12:25:09.030168  307961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:25:09.069408  307961 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0908 12:25:09.069468  307961 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:25:09.091988  307961 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0908 12:25:09.119701  307961 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0908 12:25:09.119784  307961 cli_runner.go:164] Run: docker network inspect functional-140475 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:25:09.135966  307961 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 12:25:09.142804  307961 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0908 12:25:09.145841  307961 kubeadm.go:875] updating cluster {Name:functional-140475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:25:09.145965  307961 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0908 12:25:09.146046  307961 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 12:25:09.165618  307961 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-140475
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0908 12:25:09.165633  307961 docker.go:621] Images already preloaded, skipping extraction
	I0908 12:25:09.165703  307961 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0908 12:25:09.184547  307961 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-140475
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0908 12:25:09.184562  307961 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:25:09.184570  307961 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 docker true true} ...
	I0908 12:25:09.184676  307961 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-140475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 12:25:09.184743  307961 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0908 12:25:09.234162  307961 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0908 12:25:09.234182  307961 cni.go:84] Creating CNI manager for ""
	I0908 12:25:09.234208  307961 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 12:25:09.234215  307961 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:25:09.234234  307961 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-140475 NodeName:functional-140475 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:25:09.234346  307961 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-140475"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 12:25:09.234409  307961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:25:09.243152  307961 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:25:09.243228  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:25:09.252021  307961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0908 12:25:09.271146  307961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:25:09.289010  307961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
	I0908 12:25:09.307402  307961 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 12:25:09.310777  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:25:09.402009  307961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:25:09.413396  307961 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475 for IP: 192.168.49.2
	I0908 12:25:09.413407  307961 certs.go:194] generating shared ca certs ...
	I0908 12:25:09.413422  307961 certs.go:226] acquiring lock for ca certs: {Name:mkab0eab768f036514950b55081b45acf0f9ba87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:25:09.413551  307961 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-272936/.minikube/ca.key
	I0908 12:25:09.413586  307961 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-272936/.minikube/proxy-client-ca.key
	I0908 12:25:09.413592  307961 certs.go:256] generating profile certs ...
	I0908 12:25:09.413672  307961 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.key
	I0908 12:25:09.413719  307961 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/apiserver.key.e6897943
	I0908 12:25:09.413752  307961 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/proxy-client.key
	I0908 12:25:09.413864  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/274796.pem (1338 bytes)
	W0908 12:25:09.413888  307961 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-272936/.minikube/certs/274796_empty.pem, impossibly tiny 0 bytes
	I0908 12:25:09.413895  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 12:25:09.413920  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/ca.pem (1078 bytes)
	I0908 12:25:09.413943  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/cert.pem (1123 bytes)
	I0908 12:25:09.413965  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-272936/.minikube/certs/key.pem (1679 bytes)
	I0908 12:25:09.414005  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-272936/.minikube/files/etc/ssl/certs/2747962.pem (1708 bytes)
	I0908 12:25:09.414669  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:25:09.438677  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0908 12:25:09.462973  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:25:09.487126  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 12:25:09.523201  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 12:25:09.564576  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 12:25:09.609080  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:25:09.660398  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 12:25:09.722557  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/files/etc/ssl/certs/2747962.pem --> /usr/share/ca-certificates/2747962.pem (1708 bytes)
	I0908 12:25:09.795191  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:25:09.852043  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-272936/.minikube/certs/274796.pem --> /usr/share/ca-certificates/274796.pem (1338 bytes)
	I0908 12:25:09.929325  307961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 12:25:09.956294  307961 ssh_runner.go:195] Run: openssl version
	I0908 12:25:09.964741  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2747962.pem && ln -fs /usr/share/ca-certificates/2747962.pem /etc/ssl/certs/2747962.pem"
	I0908 12:25:09.981038  307961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2747962.pem
	I0908 12:25:09.987492  307961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 12:22 /usr/share/ca-certificates/2747962.pem
	I0908 12:25:09.987560  307961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2747962.pem
	I0908 12:25:10.012655  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2747962.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 12:25:10.042426  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:25:10.058375  307961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:25:10.062540  307961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:25:10.062596  307961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:25:10.072176  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:25:10.087355  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/274796.pem && ln -fs /usr/share/ca-certificates/274796.pem /etc/ssl/certs/274796.pem"
	I0908 12:25:10.103595  307961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/274796.pem
	I0908 12:25:10.108535  307961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 12:22 /usr/share/ca-certificates/274796.pem
	I0908 12:25:10.108593  307961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/274796.pem
	I0908 12:25:10.119367  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/274796.pem /etc/ssl/certs/51391683.0"
	I0908 12:25:10.130528  307961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:25:10.137359  307961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 12:25:10.166539  307961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 12:25:10.174771  307961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 12:25:10.183637  307961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 12:25:10.194714  307961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 12:25:10.202219  307961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
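	(Editor's note) The openssl "-checkend 86400" calls above test whether each control-plane certificate expires within the next 24 hours. The same check can be sketched in Go with crypto/x509; this is an illustrative sketch only, not minikube's own certs code. The file paths are copied from the log, and the helper name expiresWithin is invented for the example.
	// certcheck.go: minimal sketch of the 24h expiry check performed above with openssl.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the first certificate in the PEM file
	// expires within the given duration (86400s in the log's openssl calls).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt", // paths from the log above
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			fmt.Println(p, "expires within 24h:", soon, "err:", err)
		}
	}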
	I0908 12:25:10.213944  307961 kubeadm.go:392] StartCluster: {Name:functional-140475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:25:10.214072  307961 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0908 12:25:10.257105  307961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 12:25:10.270314  307961 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 12:25:10.270323  307961 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 12:25:10.270373  307961 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 12:25:10.287316  307961 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:25:10.287844  307961 kubeconfig.go:125] found "functional-140475" server: "https://192.168.49.2:8441"
	I0908 12:25:10.289121  307961 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 12:25:10.302693  307961 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-09-08 12:22:43.060573313 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-09-08 12:25:09.302462360 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I0908 12:25:10.303352  307961 kubeadm.go:1152] stopping kube-system containers ...
	I0908 12:25:10.303417  307961 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0908 12:25:10.356603  307961 docker.go:484] Stopping containers: [4ce992834f47 ddb7ba696cfb 0101077a9928 490cadb2a355 ae7efbf232ca 09a635f61e28 e1c898d52b18 616b3f6374b7 6bd6d37f8ca1 e96a79d42555 44659d7c7d60 02ea4d507b87 30445a15d3f7 b4bbd0c7e775 5a5f8092cbe5 624d0c2700e9 b0328378d329 fd9d24c6886a 81682e275a9e f043527a68f4 5d4262a965e8 84d41cb3fb65 52d38dc4b252 cde196e9b1bb 031d8bd724fc b5b728e886d2 a09328718e6c 1bfb1c6555aa 125b6fef9ca7 55602dd893bc 2cdf0389c305]
	I0908 12:25:10.356683  307961 ssh_runner.go:195] Run: docker stop 4ce992834f47 ddb7ba696cfb 0101077a9928 490cadb2a355 ae7efbf232ca 09a635f61e28 e1c898d52b18 616b3f6374b7 6bd6d37f8ca1 e96a79d42555 44659d7c7d60 02ea4d507b87 30445a15d3f7 b4bbd0c7e775 5a5f8092cbe5 624d0c2700e9 b0328378d329 fd9d24c6886a 81682e275a9e f043527a68f4 5d4262a965e8 84d41cb3fb65 52d38dc4b252 cde196e9b1bb 031d8bd724fc b5b728e886d2 a09328718e6c 1bfb1c6555aa 125b6fef9ca7 55602dd893bc 2cdf0389c305
	I0908 12:25:10.845271  307961 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0908 12:25:10.970034  307961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 12:25:10.986122  307961 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Sep  8 12:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Sep  8 12:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Sep  8 12:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Sep  8 12:22 /etc/kubernetes/scheduler.conf
	
	I0908 12:25:10.986188  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0908 12:25:10.999197  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0908 12:25:11.014257  307961 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:25:11.014316  307961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 12:25:11.025509  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0908 12:25:11.035696  307961 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:25:11.035751  307961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 12:25:11.047458  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0908 12:25:11.064869  307961 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:25:11.064936  307961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 12:25:11.077313  307961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 12:25:11.090284  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:25:11.149610  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:25:13.846596  307961 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.696960077s)
	I0908 12:25:13.846621  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:25:14.023809  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:25:14.084462  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:25:14.157768  307961 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:25:14.157834  307961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:25:14.658730  307961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:25:15.158925  307961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:25:15.657954  307961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:25:15.684517  307961 api_server.go:72] duration metric: took 1.526754153s to wait for apiserver process to appear ...
	I0908 12:25:15.684531  307961 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:25:15.684550  307961 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 12:25:19.225894  307961 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 12:25:19.225912  307961 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 12:25:19.225924  307961 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 12:25:19.282866  307961 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0908 12:25:19.282883  307961 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0908 12:25:19.685226  307961 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 12:25:19.694476  307961 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:25:19.694491  307961 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:25:20.184632  307961 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 12:25:20.199934  307961 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:25:20.199950  307961 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:25:20.685311  307961 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 12:25:20.694963  307961 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0908 12:25:20.708729  307961 api_server.go:141] control plane version: v1.34.0
	I0908 12:25:20.708745  307961 api_server.go:131] duration metric: took 5.024208217s to wait for apiserver health ...
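	(The 500s above come from the apiserver's aggregated /healthz endpoint: every check passes except poststarthook/rbac/bootstrap-roles, whose failure reason is withheld from the aggregate output, and the polling simply retries until RBAC bootstrapping finishes. If this needed manual investigation, one hedged way to dig in, assuming kubectl is pointed at the functional-140475 context, is to query the verbose endpoint and the individual check that is failing:

	    # aggregate health, one line per check (same output the poll above receives)
	    kubectl --context functional-140475 get --raw '/healthz?verbose'
	    # query just the failing post-start hook check by name
	    kubectl --context functional-140475 get --raw '/healthz/poststarthook/rbac/bootstrap-roles'
	)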
	I0908 12:25:20.708753  307961 cni.go:84] Creating CNI manager for ""
	I0908 12:25:20.708763  307961 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 12:25:20.712167  307961 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0908 12:25:20.715042  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0908 12:25:20.742203  307961 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
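	(The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is minikube's bridge CNI configuration; its exact contents are not shown in this log. The sketch below only illustrates the usual bridge + portmap conflist shape, written over SSH the same way, with field values chosen to match the 10.244.0.0/24 pod CIDR seen later in this report; they are assumptions, not minikube's actual template:

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
	)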
	I0908 12:25:20.787329  307961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:25:20.791721  307961 system_pods.go:59] 7 kube-system pods found
	I0908 12:25:20.791752  307961 system_pods.go:61] "coredns-66bc5c9577-p79xg" [4e573d3c-eae9-4bd0-9f33-5b8c667ad3d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:25:20.791760  307961 system_pods.go:61] "etcd-functional-140475" [19b5cd07-26aa-4184-9fa9-fa1945b3f3b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:25:20.791767  307961 system_pods.go:61] "kube-apiserver-functional-140475" [1331084f-dab5-460b-b092-7b94d68ddd6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:25:20.791773  307961 system_pods.go:61] "kube-controller-manager-functional-140475" [a426e33d-fe9b-4438-a5e6-ee5ea4f03b5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:25:20.791778  307961 system_pods.go:61] "kube-proxy-mtw87" [5acd2352-8d2e-41fb-9d3a-e5dec0168fdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0908 12:25:20.791784  307961 system_pods.go:61] "kube-scheduler-functional-140475" [ebd55573-2475-4e76-ac63-a49d1144d78a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:25:20.791789  307961 system_pods.go:61] "storage-provisioner" [392a6faa-270d-49b3-8967-84c4e22fe60d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:25:20.791794  307961 system_pods.go:74] duration metric: took 4.455237ms to wait for pod list to return data ...
	I0908 12:25:20.791801  307961 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:25:20.794649  307961 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 12:25:20.794668  307961 node_conditions.go:123] node cpu capacity is 2
	I0908 12:25:20.794688  307961 node_conditions.go:105] duration metric: took 2.883386ms to run NodePressure ...
	I0908 12:25:20.794703  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0908 12:25:21.097788  307961 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0908 12:25:21.101921  307961 kubeadm.go:735] kubelet initialised
	I0908 12:25:21.101942  307961 kubeadm.go:736] duration metric: took 4.122294ms waiting for restarted kubelet to initialise ...
	I0908 12:25:21.101963  307961 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 12:25:21.111200  307961 ops.go:34] apiserver oom_adj: -16
	I0908 12:25:21.111219  307961 kubeadm.go:593] duration metric: took 10.840884442s to restartPrimaryControlPlane
	I0908 12:25:21.111232  307961 kubeadm.go:394] duration metric: took 10.897292619s to StartCluster
	I0908 12:25:21.111247  307961 settings.go:142] acquiring lock: {Name:mk841a706da4c4a4fb8ce124add06cca32768f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:25:21.111322  307961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-272936/kubeconfig
	I0908 12:25:21.112165  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-272936/kubeconfig: {Name:mk688057e1b67a1163f63dcfb98bef59b0d5e043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:25:21.112699  307961 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:25:21.112487  307961 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0908 12:25:21.112800  307961 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 12:25:21.113081  307961 addons.go:69] Setting storage-provisioner=true in profile "functional-140475"
	I0908 12:25:21.113096  307961 addons.go:238] Setting addon storage-provisioner=true in "functional-140475"
	W0908 12:25:21.113101  307961 addons.go:247] addon storage-provisioner should already be in state true
	I0908 12:25:21.113126  307961 host.go:66] Checking if "functional-140475" exists ...
	I0908 12:25:21.113184  307961 addons.go:69] Setting default-storageclass=true in profile "functional-140475"
	I0908 12:25:21.113199  307961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-140475"
	I0908 12:25:21.113578  307961 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
	I0908 12:25:21.113593  307961 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
	I0908 12:25:21.118638  307961 out.go:179] * Verifying Kubernetes components...
	I0908 12:25:21.121537  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:25:21.146308  307961 addons.go:238] Setting addon default-storageclass=true in "functional-140475"
	W0908 12:25:21.146323  307961 addons.go:247] addon default-storageclass should already be in state true
	I0908 12:25:21.146350  307961 host.go:66] Checking if "functional-140475" exists ...
	I0908 12:25:21.146846  307961 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
	I0908 12:25:21.159299  307961 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 12:25:21.162327  307961 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:25:21.162339  307961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 12:25:21.162416  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:25:21.187233  307961 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 12:25:21.187251  307961 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 12:25:21.187317  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
	I0908 12:25:21.229851  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
	I0908 12:25:21.241810  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
	I0908 12:25:21.355551  307961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:25:21.394959  307961 node_ready.go:35] waiting up to 6m0s for node "functional-140475" to be "Ready" ...
	I0908 12:25:21.399803  307961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:25:21.400739  307961 node_ready.go:49] node "functional-140475" is "Ready"
	I0908 12:25:21.400752  307961 node_ready.go:38] duration metric: took 5.76412ms for node "functional-140475" to be "Ready" ...
	I0908 12:25:21.400767  307961 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:25:21.400814  307961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:25:21.462519  307961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 12:25:22.417655  307961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.017826127s)
	I0908 12:25:22.417705  307961 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.016882263s)
	I0908 12:25:22.417716  307961 api_server.go:72] duration metric: took 1.304964591s to wait for apiserver process to appear ...
	I0908 12:25:22.417720  307961 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:25:22.417736  307961 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0908 12:25:22.430890  307961 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0908 12:25:22.431744  307961 api_server.go:141] control plane version: v1.34.0
	I0908 12:25:22.431756  307961 api_server.go:131] duration metric: took 14.031034ms to wait for apiserver health ...
	I0908 12:25:22.431762  307961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:25:22.432635  307961 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0908 12:25:22.434987  307961 system_pods.go:59] 7 kube-system pods found
	I0908 12:25:22.435003  307961 system_pods.go:61] "coredns-66bc5c9577-p79xg" [4e573d3c-eae9-4bd0-9f33-5b8c667ad3d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:25:22.435009  307961 system_pods.go:61] "etcd-functional-140475" [19b5cd07-26aa-4184-9fa9-fa1945b3f3b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:25:22.435018  307961 system_pods.go:61] "kube-apiserver-functional-140475" [1331084f-dab5-460b-b092-7b94d68ddd6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:25:22.435024  307961 system_pods.go:61] "kube-controller-manager-functional-140475" [a426e33d-fe9b-4438-a5e6-ee5ea4f03b5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:25:22.435028  307961 system_pods.go:61] "kube-proxy-mtw87" [5acd2352-8d2e-41fb-9d3a-e5dec0168fdc] Running
	I0908 12:25:22.435036  307961 system_pods.go:61] "kube-scheduler-functional-140475" [ebd55573-2475-4e76-ac63-a49d1144d78a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:25:22.435039  307961 system_pods.go:61] "storage-provisioner" [392a6faa-270d-49b3-8967-84c4e22fe60d] Running
	I0908 12:25:22.435045  307961 system_pods.go:74] duration metric: took 3.276793ms to wait for pod list to return data ...
	I0908 12:25:22.435051  307961 default_sa.go:34] waiting for default service account to be created ...
	I0908 12:25:22.435358  307961 addons.go:514] duration metric: took 1.322557707s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0908 12:25:22.437179  307961 default_sa.go:45] found service account: "default"
	I0908 12:25:22.437191  307961 default_sa.go:55] duration metric: took 2.135667ms for default service account to be created ...
	I0908 12:25:22.437198  307961 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 12:25:22.439815  307961 system_pods.go:86] 7 kube-system pods found
	I0908 12:25:22.439831  307961 system_pods.go:89] "coredns-66bc5c9577-p79xg" [4e573d3c-eae9-4bd0-9f33-5b8c667ad3d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:25:22.439845  307961 system_pods.go:89] "etcd-functional-140475" [19b5cd07-26aa-4184-9fa9-fa1945b3f3b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:25:22.439852  307961 system_pods.go:89] "kube-apiserver-functional-140475" [1331084f-dab5-460b-b092-7b94d68ddd6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:25:22.439859  307961 system_pods.go:89] "kube-controller-manager-functional-140475" [a426e33d-fe9b-4438-a5e6-ee5ea4f03b5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:25:22.439862  307961 system_pods.go:89] "kube-proxy-mtw87" [5acd2352-8d2e-41fb-9d3a-e5dec0168fdc] Running
	I0908 12:25:22.439867  307961 system_pods.go:89] "kube-scheduler-functional-140475" [ebd55573-2475-4e76-ac63-a49d1144d78a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:25:22.439870  307961 system_pods.go:89] "storage-provisioner" [392a6faa-270d-49b3-8967-84c4e22fe60d] Running
	I0908 12:25:22.439878  307961 system_pods.go:126] duration metric: took 2.673972ms to wait for k8s-apps to be running ...
	I0908 12:25:22.439884  307961 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 12:25:22.439939  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:25:22.452700  307961 system_svc.go:56] duration metric: took 12.805279ms WaitForService to wait for kubelet
	I0908 12:25:22.452717  307961 kubeadm.go:578] duration metric: took 1.339964883s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:25:22.452735  307961 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:25:22.455620  307961 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 12:25:22.455636  307961 node_conditions.go:123] node cpu capacity is 2
	I0908 12:25:22.455646  307961 node_conditions.go:105] duration metric: took 2.906499ms to run NodePressure ...
	I0908 12:25:22.455667  307961 start.go:241] waiting for startup goroutines ...
	I0908 12:25:22.455674  307961 start.go:246] waiting for cluster config update ...
	I0908 12:25:22.455683  307961 start.go:255] writing updated cluster config ...
	I0908 12:25:22.456014  307961 ssh_runner.go:195] Run: rm -f paused
	I0908 12:25:22.459687  307961 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:25:22.463329  307961 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p79xg" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 12:25:24.468773  307961 pod_ready.go:104] pod "coredns-66bc5c9577-p79xg" is not "Ready", error: <nil>
	W0908 12:25:26.969474  307961 pod_ready.go:104] pod "coredns-66bc5c9577-p79xg" is not "Ready", error: <nil>
	W0908 12:25:28.969843  307961 pod_ready.go:104] pod "coredns-66bc5c9577-p79xg" is not "Ready", error: <nil>
	I0908 12:25:30.468629  307961 pod_ready.go:94] pod "coredns-66bc5c9577-p79xg" is "Ready"
	I0908 12:25:30.468643  307961 pod_ready.go:86] duration metric: took 8.005300659s for pod "coredns-66bc5c9577-p79xg" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:30.471126  307961 pod_ready.go:83] waiting for pod "etcd-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:30.976978  307961 pod_ready.go:94] pod "etcd-functional-140475" is "Ready"
	I0908 12:25:30.976993  307961 pod_ready.go:86] duration metric: took 505.854182ms for pod "etcd-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:30.979862  307961 pod_ready.go:83] waiting for pod "kube-apiserver-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:31.485298  307961 pod_ready.go:94] pod "kube-apiserver-functional-140475" is "Ready"
	I0908 12:25:31.485314  307961 pod_ready.go:86] duration metric: took 505.439843ms for pod "kube-apiserver-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:31.487862  307961 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:32.495602  307961 pod_ready.go:94] pod "kube-controller-manager-functional-140475" is "Ready"
	I0908 12:25:32.495616  307961 pod_ready.go:86] duration metric: took 1.007741045s for pod "kube-controller-manager-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:32.498132  307961 pod_ready.go:83] waiting for pod "kube-proxy-mtw87" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:32.867177  307961 pod_ready.go:94] pod "kube-proxy-mtw87" is "Ready"
	I0908 12:25:32.867192  307961 pod_ready.go:86] duration metric: took 369.046153ms for pod "kube-proxy-mtw87" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:33.067428  307961 pod_ready.go:83] waiting for pod "kube-scheduler-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:33.467201  307961 pod_ready.go:94] pod "kube-scheduler-functional-140475" is "Ready"
	I0908 12:25:33.467215  307961 pod_ready.go:86] duration metric: took 399.772898ms for pod "kube-scheduler-functional-140475" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:25:33.467225  307961 pod_ready.go:40] duration metric: took 11.007516594s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:25:33.519347  307961 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 12:25:33.522324  307961 out.go:179] * Done! kubectl is now configured to use "functional-140475" cluster and "default" namespace by default
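	(The pod_ready loop above waits on each kube-system pod by its component or k8s-app label. Roughly the same readiness check can be reproduced by hand with kubectl, using the labels shown in the waits; this is a sketch for manual verification, not something the test itself runs:

	    kubectl --context functional-140475 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
	    kubectl --context functional-140475 -n kube-system wait pod \
	      -l component=kube-apiserver --for=condition=Ready --timeout=240s
	)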
	
	
	==> Docker <==
	Sep 08 12:25:15 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a12878798ff69760ac42388835a482a5f5e636f732bf9a37c9d5562aad45d207/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Sep 08 12:25:15 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/02257bbd60ea0314e825259b9080b3e85cde9933b366505af1789aff1e0e5012/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options trust-ad ndots:0 edns0]"
	Sep 08 12:25:15 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:25:15Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-p79xg_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"490cadb2a355eaf59d11e9a7cec1a9a038659bb5b31e5c9ff9f39c5cfd4a5aa7\""
	Sep 08 12:25:19 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:25:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 08 12:25:20 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:25:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/baa450b79b7955a565e9451e480ec0c5c2b8d3410a4797dd804594a510f2306c/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Sep 08 12:25:36 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:25:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8ccd06a52642a1b9b66118dfc27975238dc9f92374ce4996d9262efc2514721c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 08 12:25:37 functional-140475 dockerd[6902]: time="2025-09-08T12:25:37.109743545Z" level=error msg="Not continuing with pull after error" error="errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
	Sep 08 12:25:37 functional-140475 dockerd[6902]: time="2025-09-08T12:25:37.109796855Z" level=info msg="Ignoring extra error returned from registry" error="unauthorized: authentication required"
	Sep 08 12:25:40 functional-140475 dockerd[6902]: time="2025-09-08T12:25:40.362407814Z" level=info msg="ignoring event" container=8ccd06a52642a1b9b66118dfc27975238dc9f92374ce4996d9262efc2514721c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 08 12:25:43 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:25:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f7c16945d95288d4b4ce86e643a95d7e205b40179f5651ee90fcb086c64926ae/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 08 12:25:45 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:25:45Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Sep 08 12:25:50 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:25:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c1527b54711ccf3bc509dcabbeb4e1eae9da0aa871fba78ceaa695e31ddf17bf/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 08 12:25:50 functional-140475 dockerd[6902]: time="2025-09-08T12:25:50.780668109Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:25:50 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:25:50Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Sep 08 12:25:52 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:25:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/83e116db12522015c28842468ef2bc42cb827d3d4c16e411e7a9d4a03a7240df/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 08 12:25:52 functional-140475 dockerd[6902]: time="2025-09-08T12:25:52.698145855Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:26:05 functional-140475 dockerd[6902]: time="2025-09-08T12:26:05.465184017Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:26:07 functional-140475 dockerd[6902]: time="2025-09-08T12:26:07.471049137Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:26:34 functional-140475 dockerd[6902]: time="2025-09-08T12:26:34.493180176Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:26:34 functional-140475 dockerd[6902]: time="2025-09-08T12:26:34.716486918Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:27:21 functional-140475 dockerd[6902]: time="2025-09-08T12:27:21.496496033Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:27:22 functional-140475 dockerd[6902]: time="2025-09-08T12:27:22.492748418Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:28:42 functional-140475 dockerd[6902]: time="2025-09-08T12:28:42.565548125Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 12:28:42 functional-140475 cri-dockerd[7648]: time="2025-09-08T12:28:42Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Sep 08 12:28:49 functional-140475 dockerd[6902]: time="2025-09-08T12:28:49.451371360Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
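	(The repeated dockerd entries above are unauthenticated Docker Hub pulls hitting the toomanyrequests rate limit, which is why the docker.io/nginx:latest pull for the test pods never completes. Typical workarounds, sketched here and not part of this run, are to authenticate pulls from inside the node or to side-load an image already present on the host; the credentials below are placeholders:

	    # open a shell on the node, then authenticate Docker Hub pulls there
	    minikube -p functional-140475 ssh
	    docker login -u <dockerhub-user>      # run inside the node shell
	    # or push a locally cached image into the cluster, bypassing Docker Hub
	    minikube -p functional-140475 image load nginx:latest
	)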
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a0f0bf25d6321       nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8   4 minutes ago       Running             nginx                     0                   f7c16945d9528       nginx-svc
	28f6049f43b21       138784d87c9c5                                                                   4 minutes ago       Running             coredns                   2                   baa450b79b795       coredns-66bc5c9577-p79xg
	ae4f0a6634637       6fc32d66c1411                                                                   4 minutes ago       Running             kube-proxy                3                   4aa0c34c1b998       kube-proxy-mtw87
	3e56d761c8413       ba04bb24b9575                                                                   4 minutes ago       Running             storage-provisioner       4                   5df3eb436cb71       storage-provisioner
	8e057b7f41de7       a25f5ef9c34c3                                                                   4 minutes ago       Running             kube-scheduler            3                   02257bbd60ea0       kube-scheduler-functional-140475
	264ed758e3516       d291939e99406                                                                   4 minutes ago       Running             kube-apiserver            0                   a12878798ff69       kube-apiserver-functional-140475
	b5a5bff40e315       996be7e86d9b3                                                                   4 minutes ago       Running             kube-controller-manager   3                   46117db363397       kube-controller-manager-functional-140475
	ca1f2eb2a56e6       a1894772a478e                                                                   4 minutes ago       Running             etcd                      2                   7876da18c12d9       etcd-functional-140475
	207c0c3df856b       996be7e86d9b3                                                                   4 minutes ago       Created             kube-controller-manager   2                   ddb7ba696cfb7       kube-controller-manager-functional-140475
	152b108a85c33       a25f5ef9c34c3                                                                   4 minutes ago       Created             kube-scheduler            2                   0101077a99284       kube-scheduler-functional-140475
	4ce992834f477       6fc32d66c1411                                                                   4 minutes ago       Exited              kube-proxy                2                   e1c898d52b181       kube-proxy-mtw87
	6bd6d37f8ca18       ba04bb24b9575                                                                   5 minutes ago       Exited              storage-provisioner       3                   5d4262a965e8d       storage-provisioner
	e96a79d425559       138784d87c9c5                                                                   5 minutes ago       Exited              coredns                   1                   624d0c2700e94       coredns-66bc5c9577-p79xg
	02ea4d507b878       a1894772a478e                                                                   5 minutes ago       Exited              etcd                      1                   f043527a68f42       etcd-functional-140475
	
	
	==> coredns [28f6049f43b2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53675 - 53593 "HINFO IN 2100446065665056732.4545956877188492551. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030972055s
	
	
	==> coredns [e96a79d42555] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50588 - 59981 "HINFO IN 3063204499314061978.8433951396937313554. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013894801s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
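	(The [e96a79d42555] CoreDNS instance logged "connection refused" against 10.96.0.1:443 because it was running while the apiserver behind that ClusterIP was restarting; the replacement instance [28f6049f43b2] above comes up cleanly. A quick manual check, not part of the test, that the in-cluster apiserver address is reachable again:

	    kubectl --context functional-140475 get svc kubernetes -o wide
	    kubectl --context functional-140475 get endpoints kubernetes
	)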
	
	
	==> describe nodes <==
	Name:               functional-140475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-140475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=functional-140475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T12_23_00_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:22:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-140475
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:29:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:26:20 +0000   Mon, 08 Sep 2025 12:22:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:26:20 +0000   Mon, 08 Sep 2025 12:22:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:26:20 +0000   Mon, 08 Sep 2025 12:22:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:26:20 +0000   Mon, 08 Sep 2025 12:22:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-140475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 28cc2cd20fe74a02bfe1586146117dde
	  System UUID:                a03639dc-39eb-4af1-8eff-ffc8a710a78a
	  Boot ID:                    3b69f852-7505-47f7-82de-581d66319e23
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-7d85dfc575-t5bmg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 coredns-66bc5c9577-p79xg                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m46s
	  kube-system                 etcd-functional-140475                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m51s
	  kube-system                 kube-apiserver-functional-140475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-controller-manager-functional-140475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-proxy-mtw87                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-scheduler-functional-140475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m44s                  kube-proxy       
	  Normal   Starting                 4m30s                  kube-proxy       
	  Normal   Starting                 5m32s                  kube-proxy       
	  Normal   Starting                 7m                     kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m                     kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m (x8 over 7m)        kubelet          Node functional-140475 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m (x8 over 7m)        kubelet          Node functional-140475 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m (x7 over 7m)        kubelet          Node functional-140475 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m51s                  kubelet          Node functional-140475 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 6m51s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  6m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    6m51s                  kubelet          Node functional-140475 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m51s                  kubelet          Node functional-140475 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m51s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           6m47s                  node-controller  Node functional-140475 event: Registered Node functional-140475 in Controller
	  Warning  ContainerGCFailed        5m51s                  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   RegisteredNode           5m30s                  node-controller  Node functional-140475 event: Registered Node functional-140475 in Controller
	  Normal   Starting                 4m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m37s (x8 over 4m37s)  kubelet          Node functional-140475 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m37s (x8 over 4m37s)  kubelet          Node functional-140475 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m37s (x7 over 4m37s)  kubelet          Node functional-140475 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m29s                  node-controller  Node functional-140475 event: Registered Node functional-140475 in Controller
	
	
	==> dmesg <==
	[Sep 8 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014150] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.486895] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033827] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.725700] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.488700] kauditd_printk_skb: 36 callbacks suppressed
	[Sep 8 10:40] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep 8 11:30] hrtimer: interrupt took 33050655 ns
	[Sep 8 12:15] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [02ea4d507b87] <==
	{"level":"warn","ts":"2025-09-08T12:24:16.639838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.649417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.688735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.731763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.765271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.778536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:24:16.910398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39510","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T12:24:57.890978Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T12:24:57.891064Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-140475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-08T12:24:57.891169Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T12:24:57.891444Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T12:25:04.896880Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T12:25:04.896943Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-08T12:25:04.897086Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T12:25:04.897172Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T12:25:04.897205Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T12:25:04.897262Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-08T12:25:04.897312Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-08T12:25:04.899523Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T12:25:04.899576Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T12:25:04.899589Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T12:25:04.902221Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-08T12:25:04.902306Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T12:25:04.902428Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-08T12:25:04.902507Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-140475","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ca1f2eb2a56e] <==
	{"level":"warn","ts":"2025-09-08T12:25:17.537507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.574971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.596346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.631012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.686593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.692537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.711751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.736991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.763646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.841197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.852797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.876013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.907257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.950331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:17.980848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.017529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.061396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.088202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.122939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.166770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.220953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.244197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.257729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.273735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:25:18.337140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42186","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:29:51 up  2:12,  0 users,  load average: 0.31, 1.01, 1.05
	Linux functional-140475 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [264ed758e351] <==
	I0908 12:25:19.444010       1 aggregator.go:171] initial CRD sync complete...
	I0908 12:25:19.444219       1 autoregister_controller.go:144] Starting autoregister controller
	I0908 12:25:19.444301       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0908 12:25:19.444371       1 cache.go:39] Caches are synced for autoregister controller
	I0908 12:25:19.444844       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 12:25:19.469303       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0908 12:25:19.491878       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0908 12:25:20.092190       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0908 12:25:20.201144       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0908 12:25:20.963954       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0908 12:25:21.020287       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0908 12:25:21.061359       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0908 12:25:21.073251       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0908 12:25:22.889532       1 controller.go:667] quota admission added evaluator for: endpoints
	I0908 12:25:22.987473       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0908 12:25:23.086542       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 12:25:36.431394       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.63.105"}
	I0908 12:25:43.496219       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.95.33"}
	I0908 12:25:52.054365       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.5.255"}
	I0908 12:26:33.092750       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:26:44.800277       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:27:49.066443       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:27:58.039300       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:28:52.414335       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:29:25.650405       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [207c0c3df856] <==
	
	
	==> kube-controller-manager [b5a5bff40e31] <==
	I0908 12:25:22.680039       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 12:25:22.680380       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 12:25:22.680386       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0908 12:25:22.680138       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 12:25:22.681216       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 12:25:22.682300       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 12:25:22.685677       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 12:25:22.686301       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 12:25:22.691882       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 12:25:22.696116       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 12:25:22.698431       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 12:25:22.711012       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 12:25:22.714314       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 12:25:22.723484       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 12:25:22.729486       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 12:25:22.729518       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 12:25:22.729912       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 12:25:22.730035       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 12:25:22.729535       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 12:25:22.729765       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 12:25:22.731485       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0908 12:25:22.731720       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0908 12:25:22.734707       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 12:25:22.736252       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-140475"
	I0908 12:25:22.737513       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [4ce992834f47] <==
	I0908 12:25:10.488602       1 server_linux.go:53] "Using iptables proxy"
	I0908 12:25:10.606414       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0908 12:25:10.607244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-140475&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [ae4f0a663463] <==
	I0908 12:25:20.785069       1 server_linux.go:53] "Using iptables proxy"
	I0908 12:25:20.921442       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 12:25:21.025213       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 12:25:21.025245       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 12:25:21.025320       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 12:25:21.179249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 12:25:21.182406       1 server_linux.go:132] "Using iptables Proxier"
	I0908 12:25:21.227302       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 12:25:21.227578       1 server.go:527] "Version info" version="v1.34.0"
	I0908 12:25:21.227593       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:25:21.229207       1 config.go:200] "Starting service config controller"
	I0908 12:25:21.229217       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 12:25:21.229243       1 config.go:106] "Starting endpoint slice config controller"
	I0908 12:25:21.229247       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 12:25:21.229258       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 12:25:21.229262       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 12:25:21.237957       1 config.go:309] "Starting node config controller"
	I0908 12:25:21.237979       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 12:25:21.237987       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 12:25:21.330477       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 12:25:21.340787       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 12:25:21.340836       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [152b108a85c3] <==
	
	
	==> kube-scheduler [8e057b7f41de] <==
	I0908 12:25:18.036734       1 serving.go:386] Generated self-signed cert in-memory
	I0908 12:25:19.475887       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 12:25:19.475921       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:25:19.484805       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 12:25:19.485031       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 12:25:19.485180       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:25:19.485277       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:25:19.485375       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 12:25:19.485465       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 12:25:19.486343       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 12:25:19.487832       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 12:25:19.585623       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 12:25:19.585742       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:25:19.585630       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Sep 08 12:27:58 functional-140475 kubelet[8797]: E0908 12:27:58.237161    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:28:01 functional-140475 kubelet[8797]: E0908 12:28:01.237926    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:28:13 functional-140475 kubelet[8797]: E0908 12:28:13.237879    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:28:13 functional-140475 kubelet[8797]: E0908 12:28:13.237903    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:28:24 functional-140475 kubelet[8797]: E0908 12:28:24.238502    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:28:28 functional-140475 kubelet[8797]: E0908 12:28:28.237630    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:28:37 functional-140475 kubelet[8797]: E0908 12:28:37.237783    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:28:42 functional-140475 kubelet[8797]: E0908 12:28:42.568814    8797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 08 12:28:42 functional-140475 kubelet[8797]: E0908 12:28:42.568929    8797 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 08 12:28:42 functional-140475 kubelet[8797]: E0908 12:28:42.569000    8797 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 08 12:28:42 functional-140475 kubelet[8797]: E0908 12:28:42.569031    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:28:49 functional-140475 kubelet[8797]: E0908 12:28:49.455220    8797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 08 12:28:49 functional-140475 kubelet[8797]: E0908 12:28:49.455319    8797 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 08 12:28:49 functional-140475 kubelet[8797]: E0908 12:28:49.455416    8797 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-t5bmg_default(3d104d49-ee46-419f-87a7-43b430053f2b): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 08 12:28:49 functional-140475 kubelet[8797]: E0908 12:28:49.455450    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:28:54 functional-140475 kubelet[8797]: E0908 12:28:54.244342    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:29:00 functional-140475 kubelet[8797]: E0908 12:29:00.256319    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:29:06 functional-140475 kubelet[8797]: E0908 12:29:06.237869    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:29:11 functional-140475 kubelet[8797]: E0908 12:29:11.237182    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:29:19 functional-140475 kubelet[8797]: E0908 12:29:19.237686    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:29:25 functional-140475 kubelet[8797]: E0908 12:29:25.237702    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:29:34 functional-140475 kubelet[8797]: E0908 12:29:34.238424    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:29:36 functional-140475 kubelet[8797]: E0908 12:29:36.237334    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	Sep 08 12:29:48 functional-140475 kubelet[8797]: E0908 12:29:48.237130    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ea595f80-e42a-47b4-bbbb-a8b2b4a4b3b1"
	Sep 08 12:29:48 functional-140475 kubelet[8797]: E0908 12:29:48.238268    8797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t5bmg" podUID="3d104d49-ee46-419f-87a7-43b430053f2b"
	
	
	==> storage-provisioner [3e56d761c841] <==
	W0908 12:29:27.331895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:29.335212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:29.339557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:31.342775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:31.347691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:33.352748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:33.360510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:35.363861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:35.368511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:37.371810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:37.379643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:39.383205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:39.387779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:41.390341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:41.396939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:43.400659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:43.405944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:45.409239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:45.414348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:47.417149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:47.421912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:49.424964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:49.429337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:51.432920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:29:51.437882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6bd6d37f8ca1] <==
	I0908 12:24:39.805610       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 12:24:39.817707       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 12:24:39.818001       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 12:24:39.827051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:43.281855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:47.542472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:51.141255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:54.195301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:57.217075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:57.222388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 12:24:57.222556       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 12:24:57.222806       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-140475_be3659dd-d2e5-49cd-8419-febf302bbd52!
	I0908 12:24:57.224364       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6f4e765b-be4a-4c1c-98b1-2642ed77f8a2", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-140475_be3659dd-d2e5-49cd-8419-febf302bbd52 became leader
	W0908 12:24:57.228147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:24:57.233779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 12:24:57.323828       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-140475_be3659dd-d2e5-49cd-8419-febf302bbd52!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-140475 -n functional-140475
helpers_test.go:269: (dbg) Run:  kubectl --context functional-140475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-connect-7d85dfc575-t5bmg sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-140475 describe pod hello-node-connect-7d85dfc575-t5bmg sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-140475 describe pod hello-node-connect-7d85dfc575-t5bmg sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-connect-7d85dfc575-t5bmg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-140475/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 12:25:51 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sks76 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sks76:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-t5bmg to functional-140475
	  Normal   Pulling    63s (x5 over 4m)      kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     63s (x5 over 4m)      kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     63s (x5 over 4m)      kubelet            Error: ErrImagePull
	  Warning  Failed     16s (x15 over 3m59s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4s (x16 over 3m59s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-140475/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 12:25:49 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h6sml (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-h6sml:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  4m2s                   default-scheduler  Successfully assigned default/sp-pod to functional-140475
	  Warning  Failed     2m31s (x3 over 3m47s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    70s (x5 over 4m2s)     kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     70s (x2 over 4m2s)     kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     70s (x5 over 4m2s)     kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x15 over 4m1s)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4s (x15 over 4m1s)     kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (249.19s)
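
Note: both non-running pods in this post-mortem (hello-node-connect-7d85dfc575-t5bmg and sp-pod) are stuck in ImagePullBackOff because unauthenticated pulls of kicbase/echo-server and docker.io/nginx hit Docker Hub's toomanyrequests rate limit (see the kubelet log and pod events above, and https://www.docker.com/increase-rate-limit). The commands below are a minimal sketch, not part of the recorded run, of one way to pre-seed those images on the CI host so the test pods do not depend on anonymous Docker Hub pulls; it assumes Docker Hub credentials are available, and the profile name functional-140475 is taken from the logs above.

    # authenticate so subsequent pulls count against the higher, authenticated rate limit
    docker login
    # pull the images the failing pods need onto the host
    docker pull kicbase/echo-server:latest
    docker pull nginx:latest
    # copy them into the minikube node's container runtime for this profile
    minikube -p functional-140475 image load kicbase/echo-server:latest
    minikube -p functional-140475 image load nginx:latest

Whether the kubelet still attempts a remote pull after pre-loading depends on each pod's imagePullPolicy, so this sketch reduces, but does not strictly guarantee elimination of, Docker Hub traffic.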

                                                
                                    

Test pass (318/347)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.93
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.0/json-events 5.03
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.16
18 TestDownloadOnly/v1.34.0/DeleteAll 0.35
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.59
22 TestOffline 61.27
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 148.24
29 TestAddons/serial/Volcano 43.2
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.9
35 TestAddons/parallel/Registry 16.28
36 TestAddons/parallel/RegistryCreds 0.95
37 TestAddons/parallel/Ingress 21.38
38 TestAddons/parallel/InspektorGadget 6.22
39 TestAddons/parallel/MetricsServer 7.19
41 TestAddons/parallel/CSI 41.48
42 TestAddons/parallel/Headlamp 17.85
43 TestAddons/parallel/CloudSpanner 5.6
44 TestAddons/parallel/LocalPath 52.43
45 TestAddons/parallel/NvidiaDevicePlugin 5.66
46 TestAddons/parallel/Yakd 11.77
48 TestAddons/StoppedEnableDisable 11.2
49 TestCertOptions 44.39
50 TestCertExpiration 264.4
51 TestDockerFlags 49.78
52 TestForceSystemdFlag 42.48
53 TestForceSystemdEnv 43.61
59 TestErrorSpam/setup 31.74
60 TestErrorSpam/start 0.8
61 TestErrorSpam/status 1.06
62 TestErrorSpam/pause 1.44
63 TestErrorSpam/unpause 1.54
64 TestErrorSpam/stop 10.96
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 73.92
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 50.01
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.86
76 TestFunctional/serial/CacheCmd/cache/add_local 1.03
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.07
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.15
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 54.8
85 TestFunctional/serial/ComponentHealth 0.09
86 TestFunctional/serial/LogsCmd 1.26
87 TestFunctional/serial/LogsFileCmd 1.28
88 TestFunctional/serial/InvalidService 4.95
90 TestFunctional/parallel/ConfigCmd 0.47
92 TestFunctional/parallel/DryRun 0.59
93 TestFunctional/parallel/InternationalLanguage 0.28
94 TestFunctional/parallel/StatusCmd 1.35
99 TestFunctional/parallel/AddonsCmd 0.14
102 TestFunctional/parallel/SSHCmd 0.75
103 TestFunctional/parallel/CpCmd 2.31
105 TestFunctional/parallel/FileSync 0.29
106 TestFunctional/parallel/CertSync 1.62
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
114 TestFunctional/parallel/License 0.33
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ServiceCmd/DeployApp 351.21
127 TestFunctional/parallel/ServiceCmd/List 0.52
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.57
130 TestFunctional/parallel/ServiceCmd/Format 0.37
131 TestFunctional/parallel/ServiceCmd/URL 0.4
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
133 TestFunctional/parallel/ProfileCmd/profile_list 0.44
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
135 TestFunctional/parallel/MountCmd/any-port 8.56
136 TestFunctional/parallel/MountCmd/specific-port 2.12
137 TestFunctional/parallel/MountCmd/VerifyCleanup 2.59
138 TestFunctional/parallel/Version/short 0.06
139 TestFunctional/parallel/Version/components 1.03
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.45
145 TestFunctional/parallel/ImageCommands/Setup 0.77
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.95
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.06
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.41
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.59
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
156 TestFunctional/parallel/DockerEnv/bash 1.04
157 TestFunctional/delete_echo-server_images 0.11
158 TestFunctional/delete_my-image_image 0.07
159 TestFunctional/delete_minikube_cached_images 0.03
164 TestMultiControlPlane/serial/StartCluster 140.45
165 TestMultiControlPlane/serial/DeployApp 41.88
166 TestMultiControlPlane/serial/PingHostFromPods 1.7
167 TestMultiControlPlane/serial/AddWorkerNode 20.77
168 TestMultiControlPlane/serial/NodeLabels 0.12
169 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.47
170 TestMultiControlPlane/serial/CopyFile 20.86
171 TestMultiControlPlane/serial/StopSecondaryNode 12.46
172 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
173 TestMultiControlPlane/serial/RestartSecondaryNode 89.63
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.98
175 TestMultiControlPlane/serial/RestartClusterKeepsNodes 184.72
176 TestMultiControlPlane/serial/DeleteSecondaryNode 12.01
177 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
178 TestMultiControlPlane/serial/StopCluster 32.74
179 TestMultiControlPlane/serial/RestartCluster 108.2
180 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
181 TestMultiControlPlane/serial/AddSecondaryNode 38.96
182 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
185 TestImageBuild/serial/Setup 31.85
186 TestImageBuild/serial/NormalBuild 1.62
187 TestImageBuild/serial/BuildWithBuildArg 0.92
188 TestImageBuild/serial/BuildWithDockerIgnore 0.72
189 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.96
193 TestJSONOutput/start/Command 43.34
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.64
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.52
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 5.85
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.24
218 TestKicCustomNetwork/create_custom_network 35.18
219 TestKicCustomNetwork/use_default_bridge_network 34.59
220 TestKicExistingNetwork 32.39
221 TestKicCustomSubnet 36.05
222 TestKicStaticIP 33.95
223 TestMainNoArgs 0.06
224 TestMinikubeProfile 69.95
227 TestMountStart/serial/StartWithMountFirst 8.04
228 TestMountStart/serial/VerifyMountFirst 0.26
229 TestMountStart/serial/StartWithMountSecond 8.72
230 TestMountStart/serial/VerifyMountSecond 0.25
231 TestMountStart/serial/DeleteFirst 1.51
232 TestMountStart/serial/VerifyMountPostDelete 0.27
233 TestMountStart/serial/Stop 1.2
234 TestMountStart/serial/RestartStopped 8.88
235 TestMountStart/serial/VerifyMountPostStop 0.26
238 TestMultiNode/serial/FreshStart2Nodes 67.82
239 TestMultiNode/serial/DeployApp2Nodes 48.04
240 TestMultiNode/serial/PingHostFrom2Pods 1.07
241 TestMultiNode/serial/AddNode 15.82
242 TestMultiNode/serial/MultiNodeLabels 0.12
243 TestMultiNode/serial/ProfileList 0.94
244 TestMultiNode/serial/CopyFile 10.63
245 TestMultiNode/serial/StopNode 2.28
246 TestMultiNode/serial/StartAfterStop 9.06
247 TestMultiNode/serial/RestartKeepsNodes 74.86
248 TestMultiNode/serial/DeleteNode 5.65
249 TestMultiNode/serial/StopMultiNode 21.77
250 TestMultiNode/serial/RestartMultiNode 51.45
251 TestMultiNode/serial/ValidateNameConflict 36.6
256 TestPreload 149.63
258 TestScheduledStopUnix 105.85
259 TestSkaffold 145
261 TestInsufficientStorage 10.91
262 TestRunningBinaryUpgrade 88.74
264 TestKubernetesUpgrade 376.75
265 TestMissingContainerUpgrade 90.31
277 TestStoppedBinaryUpgrade/Setup 1.02
278 TestStoppedBinaryUpgrade/Upgrade 76.46
279 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
281 TestPause/serial/Start 78.2
282 TestPause/serial/SecondStartNoReconfiguration 55.09
283 TestPause/serial/Pause 0.64
284 TestPause/serial/VerifyStatus 0.4
285 TestPause/serial/Unpause 0.7
286 TestPause/serial/PauseAgain 0.7
287 TestPause/serial/DeletePaused 2.32
288 TestPause/serial/VerifyDeletedResources 0.43
297 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
298 TestNoKubernetes/serial/StartWithK8s 38.83
299 TestNoKubernetes/serial/StartWithStopK8s 18.55
300 TestNoKubernetes/serial/Start 7.19
301 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
302 TestNoKubernetes/serial/ProfileList 1.2
303 TestNoKubernetes/serial/Stop 1.21
304 TestNoKubernetes/serial/StartNoArgs 10.33
305 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.46
306 TestNetworkPlugins/group/auto/Start 74.52
307 TestNetworkPlugins/group/kindnet/Start 73.45
308 TestNetworkPlugins/group/auto/KubeletFlags 0.38
309 TestNetworkPlugins/group/auto/NetCatPod 10.41
310 TestNetworkPlugins/group/auto/DNS 0.3
311 TestNetworkPlugins/group/auto/Localhost 0.25
312 TestNetworkPlugins/group/auto/HairPin 0.23
313 TestNetworkPlugins/group/calico/Start 84.67
314 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
315 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
316 TestNetworkPlugins/group/kindnet/NetCatPod 10.35
317 TestNetworkPlugins/group/kindnet/DNS 0.25
318 TestNetworkPlugins/group/kindnet/Localhost 0.19
319 TestNetworkPlugins/group/kindnet/HairPin 0.22
320 TestNetworkPlugins/group/custom-flannel/Start 64.65
321 TestNetworkPlugins/group/calico/ControllerPod 6.01
322 TestNetworkPlugins/group/calico/KubeletFlags 0.35
323 TestNetworkPlugins/group/calico/NetCatPod 11.36
324 TestNetworkPlugins/group/calico/DNS 0.31
325 TestNetworkPlugins/group/calico/Localhost 0.35
326 TestNetworkPlugins/group/calico/HairPin 0.26
327 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
328 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.52
329 TestNetworkPlugins/group/false/Start 78.27
330 TestNetworkPlugins/group/custom-flannel/DNS 0.27
331 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
332 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
333 TestNetworkPlugins/group/enable-default-cni/Start 77.5
334 TestNetworkPlugins/group/false/KubeletFlags 0.3
335 TestNetworkPlugins/group/false/NetCatPod 10.32
336 TestNetworkPlugins/group/false/DNS 0.21
337 TestNetworkPlugins/group/false/Localhost 0.16
338 TestNetworkPlugins/group/false/HairPin 0.17
339 TestNetworkPlugins/group/flannel/Start 79.93
340 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
341 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.36
342 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
343 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
344 TestNetworkPlugins/group/enable-default-cni/HairPin 0.25
345 TestNetworkPlugins/group/bridge/Start 82.36
346 TestNetworkPlugins/group/flannel/ControllerPod 5.03
347 TestNetworkPlugins/group/flannel/KubeletFlags 0.58
348 TestNetworkPlugins/group/flannel/NetCatPod 11.37
349 TestNetworkPlugins/group/flannel/DNS 0.18
350 TestNetworkPlugins/group/flannel/Localhost 0.16
351 TestNetworkPlugins/group/flannel/HairPin 0.17
352 TestNetworkPlugins/group/kubenet/Start 86.09
353 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
354 TestNetworkPlugins/group/bridge/NetCatPod 9.49
355 TestNetworkPlugins/group/bridge/DNS 0.3
356 TestNetworkPlugins/group/bridge/Localhost 0.2
357 TestNetworkPlugins/group/bridge/HairPin 0.3
359 TestStartStop/group/old-k8s-version/serial/FirstStart 89.43
360 TestNetworkPlugins/group/kubenet/KubeletFlags 0.38
361 TestNetworkPlugins/group/kubenet/NetCatPod 12.38
362 TestNetworkPlugins/group/kubenet/DNS 0.19
363 TestNetworkPlugins/group/kubenet/Localhost 0.18
364 TestNetworkPlugins/group/kubenet/HairPin 0.18
366 TestStartStop/group/no-preload/serial/FirstStart 61.34
367 TestStartStop/group/old-k8s-version/serial/DeployApp 11.56
368 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.75
369 TestStartStop/group/old-k8s-version/serial/Stop 11.03
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
371 TestStartStop/group/old-k8s-version/serial/SecondStart 54.7
372 TestStartStop/group/no-preload/serial/DeployApp 10.5
373 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.51
374 TestStartStop/group/no-preload/serial/Stop 11.12
375 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
376 TestStartStop/group/no-preload/serial/SecondStart 28.89
377 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
378 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.16
379 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
380 TestStartStop/group/old-k8s-version/serial/Pause 3.84
382 TestStartStop/group/embed-certs/serial/FirstStart 83.02
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.12
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
386 TestStartStop/group/no-preload/serial/Pause 3.88
388 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.07
389 TestStartStop/group/embed-certs/serial/DeployApp 9.4
390 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.54
391 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
392 TestStartStop/group/embed-certs/serial/Stop 11.2
393 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.51
394 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.15
395 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
396 TestStartStop/group/embed-certs/serial/SecondStart 61.99
397 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
398 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 61.97
399 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
400 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
401 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
402 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.16
403 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
404 TestStartStop/group/embed-certs/serial/Pause 3.63
405 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
406 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.81
408 TestStartStop/group/newest-cni/serial/FirstStart 38.38
409 TestStartStop/group/newest-cni/serial/DeployApp 0
410 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.1
411 TestStartStop/group/newest-cni/serial/Stop 8.96
412 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
413 TestStartStop/group/newest-cni/serial/SecondStart 17.17
414 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
415 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
416 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
417 TestStartStop/group/newest-cni/serial/Pause 3.03
x
+
TestDownloadOnly/v1.28.0/json-events (5.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-360717 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-360717 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.927661315s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.93s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 12:16:02.645618  274796 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
I0908 12:16:02.645698  274796 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-272936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-360717
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-360717: exit status 85 (91.305639ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-360717 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-360717 │ jenkins │ v1.36.0 │ 08 Sep 25 12:15 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:15:56
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:15:56.764844  274802 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:15:56.765044  274802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:15:56.765075  274802 out.go:374] Setting ErrFile to fd 2...
	I0908 12:15:56.765096  274802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:15:56.765369  274802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
	W0908 12:15:56.765533  274802 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21508-272936/.minikube/config/config.json: open /home/jenkins/minikube-integration/21508-272936/.minikube/config/config.json: no such file or directory
	I0908 12:15:56.765963  274802 out.go:368] Setting JSON to true
	I0908 12:15:56.766813  274802 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7107,"bootTime":1757326650,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 12:15:56.766909  274802 start.go:140] virtualization:  
	I0908 12:15:56.770990  274802 out.go:99] [download-only-360717] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	W0908 12:15:56.771187  274802 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21508-272936/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 12:15:56.771301  274802 notify.go:220] Checking for updates...
	I0908 12:15:56.775294  274802 out.go:171] MINIKUBE_LOCATION=21508
	I0908 12:15:56.779331  274802 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:15:56.782254  274802 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig
	I0908 12:15:56.785113  274802 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube
	I0908 12:15:56.788118  274802 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0908 12:15:56.793766  274802 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 12:15:56.794018  274802 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:15:56.828851  274802 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:15:56.828982  274802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:15:56.884362  274802 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-08 12:15:56.87525016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:15:56.884465  274802 docker.go:318] overlay module found
	I0908 12:15:56.887514  274802 out.go:99] Using the docker driver based on user configuration
	I0908 12:15:56.887554  274802 start.go:304] selected driver: docker
	I0908 12:15:56.887569  274802 start.go:918] validating driver "docker" against <nil>
	I0908 12:15:56.887678  274802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:15:56.944352  274802 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-08 12:15:56.935294614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:15:56.944510  274802 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 12:15:56.944803  274802 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0908 12:15:56.944960  274802 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 12:15:56.948035  274802 out.go:171] Using Docker driver with root privileges
	I0908 12:15:56.950955  274802 cni.go:84] Creating CNI manager for ""
	I0908 12:15:56.951036  274802 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 12:15:56.951053  274802 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 12:15:56.951131  274802 start.go:348] cluster config:
	{Name:download-only-360717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-360717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:15:56.954063  274802 out.go:99] Starting "download-only-360717" primary control-plane node in "download-only-360717" cluster
	I0908 12:15:56.954088  274802 cache.go:123] Beginning downloading kic base image for docker with docker
	I0908 12:15:56.956877  274802 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:15:56.956905  274802 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0908 12:15:56.957066  274802 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:15:56.973237  274802 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 12:15:56.973981  274802 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 12:15:56.974105  274802 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 12:15:57.021701  274802 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0908 12:15:57.021739  274802 cache.go:58] Caching tarball of preloaded images
	I0908 12:15:57.022503  274802 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0908 12:15:57.025874  274802 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 12:15:57.025924  274802 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 ...
	I0908 12:15:57.107122  274802 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /home/jenkins/minikube-integration/21508-272936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0908 12:16:00.299803  274802 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 ...
	I0908 12:16:00.299931  274802 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21508-272936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 ...
	I0908 12:16:01.218393  274802 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I0908 12:16:01.218773  274802 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/download-only-360717/config.json ...
	I0908 12:16:01.218806  274802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/download-only-360717/config.json: {Name:mk500ff899d62ab1d9661a6c712158f5f5c90608 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:16:01.219523  274802 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0908 12:16:01.219704  274802 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21508-272936/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-360717 host does not exist
	  To start a cluster, run: "minikube start -p download-only-360717"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
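Note on the preload download shown in the log above: the tarball URL carries an md5 checksum, and the run saves and verifies that checksum after downloading. A minimal shell sketch for re-checking the cached copy by hand, assuming the cache path and digest quoted in the log (illustrative only, not part of the test run):

  # Recompute the md5 of the cached preload tarball and compare it to the
  # digest from the "?checksum=md5:..." parameter in the download URL.
  cd /home/jenkins/minikube-integration/21508-272936/.minikube/cache/preloaded-tarball
  md5sum preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
  # expected: 002a73d62a3b066a08573cf3da2c8cb4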

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-360717
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.0/json-events (5.03s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-060804 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-060804 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.028832836s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.03s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 12:16:08.139198  274796 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
I0908 12:16:08.139244  274796 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-272936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.16s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-060804
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-060804: exit status 85 (156.275424ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-360717 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-360717 │ jenkins │ v1.36.0 │ 08 Sep 25 12:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 12:16 UTC │ 08 Sep 25 12:16 UTC │
	│ delete  │ -p download-only-360717                                                                                                                                                       │ download-only-360717 │ jenkins │ v1.36.0 │ 08 Sep 25 12:16 UTC │ 08 Sep 25 12:16 UTC │
	│ start   │ -o=json --download-only -p download-only-060804 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-060804 │ jenkins │ v1.36.0 │ 08 Sep 25 12:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:16:03
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:16:03.153528  275003 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:16:03.153715  275003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:16:03.153741  275003 out.go:374] Setting ErrFile to fd 2...
	I0908 12:16:03.153762  275003 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:16:03.154062  275003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
	I0908 12:16:03.154499  275003 out.go:368] Setting JSON to true
	I0908 12:16:03.155375  275003 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7114,"bootTime":1757326650,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 12:16:03.155468  275003 start.go:140] virtualization:  
	I0908 12:16:03.158906  275003 out.go:99] [download-only-060804] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 12:16:03.159194  275003 notify.go:220] Checking for updates...
	I0908 12:16:03.162083  275003 out.go:171] MINIKUBE_LOCATION=21508
	I0908 12:16:03.165155  275003 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:16:03.168139  275003 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig
	I0908 12:16:03.171022  275003 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube
	I0908 12:16:03.174020  275003 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0908 12:16:03.179912  275003 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 12:16:03.180214  275003 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:16:03.209767  275003 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:16:03.209935  275003 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:16:03.267068  275003 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-08 12:16:03.257735041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:16:03.267172  275003 docker.go:318] overlay module found
	I0908 12:16:03.270185  275003 out.go:99] Using the docker driver based on user configuration
	I0908 12:16:03.270230  275003 start.go:304] selected driver: docker
	I0908 12:16:03.270244  275003 start.go:918] validating driver "docker" against <nil>
	I0908 12:16:03.270351  275003 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:16:03.324643  275003 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-08 12:16:03.315859425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:16:03.324806  275003 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 12:16:03.325066  275003 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0908 12:16:03.325275  275003 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 12:16:03.328346  275003 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-060804 host does not exist
	  To start a cluster, run: "minikube start -p download-only-060804"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.16s)

TestDownloadOnly/v1.34.0/DeleteAll (0.35s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.35s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-060804
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I0908 12:16:10.044696  274796 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-564881 --alsologtostderr --binary-mirror http://127.0.0.1:43393 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-564881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-564881
--- PASS: TestBinaryMirror (0.59s)

TestOffline (61.27s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-115443 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-115443 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (59.046901939s)
helpers_test.go:175: Cleaning up "offline-docker-115443" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-115443
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-115443: (2.221033315s)
--- PASS: TestOffline (61.27s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-812729
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-812729: exit status 85 (72.252183ms)

-- stdout --
	* Profile "addons-812729" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-812729"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-812729
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-812729: exit status 85 (75.559955ms)

-- stdout --
	* Profile "addons-812729" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-812729"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
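Both PreSetup checks above deliberately run addon commands against a profile that does not exist yet; minikube exits with status 85 and suggests "minikube start -p addons-812729". For reference, the usual order once a profile exists is roughly the following sketch (not part of the test run; the dashboard addon is just an example):

  # Create the profile first, then enable/disable addons on it.
  out/minikube-linux-arm64 start -p addons-812729 --driver=docker --container-runtime=docker
  out/minikube-linux-arm64 addons enable dashboard -p addons-812729
  out/minikube-linux-arm64 addons disable dashboard -p addons-812729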

TestAddons/Setup (148.24s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-812729 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-812729 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m28.23643697s)
--- PASS: TestAddons/Setup (148.24s)

TestAddons/serial/Volcano (43.2s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 72.579622ms
addons_test.go:876: volcano-admission stabilized in 72.691968ms
addons_test.go:868: volcano-scheduler stabilized in 72.878096ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-nnzmg" [fd1a775e-ee55-4133-90f3-7d591a6d62af] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.009267774s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-p2tz2" [713df1d4-f667-4000-94c7-1d34d75dab99] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.005227577s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-d9nm9" [94b33fec-c586-4f73-a623-75719f7da6bb] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00392788s
addons_test.go:903: (dbg) Run:  kubectl --context addons-812729 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-812729 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-812729 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [61cbb0fc-a488-49fd-87bd-ab10adfd4ea7] Pending
helpers_test.go:352: "test-job-nginx-0" [61cbb0fc-a488-49fd-87bd-ab10adfd4ea7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [61cbb0fc-a488-49fd-87bd-ab10adfd4ea7] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.002866108s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-812729 addons disable volcano --alsologtostderr -v=1: (11.547005269s)
--- PASS: TestAddons/serial/Volcano (43.20s)
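The Volcano check above removes the volcano-admission-init job, creates a job from testdata/vcjob.yaml, and waits for the pod test-job-nginx-0 in the my-volcano namespace. The manifest itself is not reproduced in this report; as a rough sketch only (the apiVersion, task layout, and image are assumptions, not the contents of the actual testdata file), a Volcano job of that shape could be created like this:

  kubectl --context addons-812729 apply -f - <<'EOF'
  apiVersion: batch.volcano.sh/v1alpha1
  kind: Job
  metadata:
    name: test-job
    namespace: my-volcano
  spec:
    minAvailable: 1
    schedulerName: volcano
    tasks:
      - replicas: 1
        name: nginx            # pod names come out as test-job-nginx-0, as seen above
        template:
          spec:
            restartPolicy: Never
            containers:
              - name: nginx
                image: nginx
  EOF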

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-812729 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-812729 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (9.9s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-812729 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-812729 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [137a5015-e943-49dd-b5af-c8919b1c05e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [137a5015-e943-49dd-b5af-c8919b1c05e7] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003619257s
addons_test.go:694: (dbg) Run:  kubectl --context addons-812729 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-812729 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-812729 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-812729 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.90s)

TestAddons/parallel/Registry (16.28s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.253834ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-sbbcm" [d941c3a8-75ee-4ac2-a26a-efeab9f5ba16] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00554297s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-pmbtb" [c224087f-b7ae-497a-b758-7aab1ce2da1e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003602268s
addons_test.go:392: (dbg) Run:  kubectl --context addons-812729 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-812729 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-812729 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.246144001s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 ip
2025/09/08 12:19:56 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.28s)

TestAddons/parallel/RegistryCreds (0.95s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.795317ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-812729
addons_test.go:332: (dbg) Run:  kubectl --context addons-812729 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.95s)

TestAddons/parallel/Ingress (21.38s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-812729 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-812729 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-812729 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [48d4df72-d940-464c-8d60-6c64503a99ec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [48d4df72-d940-464c-8d60-6c64503a99ec] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003314112s
I0908 12:21:14.069822  274796 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-812729 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-812729 addons disable ingress-dns --alsologtostderr -v=1: (1.746488429s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-812729 addons disable ingress --alsologtostderr -v=1: (7.752377723s)
--- PASS: TestAddons/parallel/Ingress (21.38s)

TestAddons/parallel/InspektorGadget (6.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-79ks5" [e581653b-d1e8-4a5f-bcd0-51c393b629b9] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003916531s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.22s)

TestAddons/parallel/MetricsServer (7.19s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.485993ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-x2bb6" [a0082b65-eca1-43f1-b41e-207eb7eda23b] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004161841s
addons_test.go:463: (dbg) Run:  kubectl --context addons-812729 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-812729 addons disable metrics-server --alsologtostderr -v=1: (1.053686481s)
--- PASS: TestAddons/parallel/MetricsServer (7.19s)

TestAddons/parallel/CSI (41.48s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0908 12:20:22.121593  274796 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 12:20:22.126055  274796 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 12:20:22.126083  274796 kapi.go:107] duration metric: took 8.231274ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.24277ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-812729 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-812729 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [756cdcfa-f229-478a-8402-d52b791b615e] Pending
helpers_test.go:352: "task-pv-pod" [756cdcfa-f229-478a-8402-d52b791b615e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [756cdcfa-f229-478a-8402-d52b791b615e] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003550032s
addons_test.go:572: (dbg) Run:  kubectl --context addons-812729 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-812729 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-812729 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-812729 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-812729 delete pod task-pv-pod: (1.016497794s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-812729 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-812729 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-812729 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [6094bb25-28c8-41ec-9023-0f61ef802786] Pending
helpers_test.go:352: "task-pv-pod-restore" [6094bb25-28c8-41ec-9023-0f61ef802786] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [6094bb25-28c8-41ec-9023-0f61ef802786] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003449076s
addons_test.go:614: (dbg) Run:  kubectl --context addons-812729 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-812729 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-812729 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-812729 addons disable volumesnapshots --alsologtostderr -v=1: (1.032239909s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-812729 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.97979019s)
--- PASS: TestAddons/parallel/CSI (41.48s)
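The CSI run above exercises the full snapshot/restore flow: PVC hpvc, pod task-pv-pod, VolumeSnapshot new-snapshot-demo, then a restored PVC hpvc-restore backed by a new pod task-pv-pod-restore. The testdata manifests are not included in this report; as a rough sketch only (the storage class name and size are assumptions about the csi-hostpath-driver addon, not values taken from this log), the restore PVC would look something like:

  kubectl --context addons-812729 apply -f - <<'EOF'
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: hpvc-restore
  spec:
    storageClassName: csi-hostpath-sc
    dataSource:
      apiGroup: snapshot.storage.k8s.io
      kind: VolumeSnapshot
      name: new-snapshot-demo
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  EOF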

TestAddons/parallel/Headlamp (17.85s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-812729 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-812729 --alsologtostderr -v=1: (1.13084578s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-6vm7n" [a8c8d749-22d2-4d71-a647-c57b6820d4dd] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-6vm7n" [a8c8d749-22d2-4d71-a647-c57b6820d4dd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-6vm7n" [a8c8d749-22d2-4d71-a647-c57b6820d4dd] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004356059s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-812729 addons disable headlamp --alsologtostderr -v=1: (5.713255993s)
--- PASS: TestAddons/parallel/Headlamp (17.85s)

TestAddons/parallel/CloudSpanner (5.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-s4nl5" [492f8852-26b8-4b4a-99f4-442d34e3b3c8] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00352374s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

TestAddons/parallel/LocalPath (52.43s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-812729 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-812729 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-812729 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [9c892815-109e-4897-8415-610c41a9b02f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [9c892815-109e-4897-8415-610c41a9b02f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [9c892815-109e-4897-8415-610c41a9b02f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003497542s
addons_test.go:967: (dbg) Run:  kubectl --context addons-812729 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 ssh "cat /opt/local-path-provisioner/pvc-efb7f650-3214-41cb-8953-06eec1f8d44e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-812729 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-812729 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-812729 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.06471901s)
--- PASS: TestAddons/parallel/LocalPath (52.43s)

TestAddons/parallel/NvidiaDevicePlugin (5.66s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-jlqfs" [21f89f9c-e492-4254-8522-6be3d54be2fc] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.023819721s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.66s)
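
Outside the harness, the same "healthy within N seconds" condition can be approximated with kubectl wait rather than a hand-rolled poll; the context, namespace, and label come from the log above, while the timeout value is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kubectl wait blocks until every matching pod reports the Ready condition,
	// or exits non-zero once the timeout expires.
	out, err := exec.Command("kubectl", "--context", "addons-812729",
		"-n", "kube-system", "wait", "pod",
		"-l", "name=nvidia-device-plugin-ds",
		"--for=condition=Ready", "--timeout=6m").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}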

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-5fpcj" [057b4f0e-8475-4d01-a9d4-ddb68775809e] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002754882s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-812729 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-812729 addons disable yakd --alsologtostderr -v=1: (5.764077667s)
--- PASS: TestAddons/parallel/Yakd (11.77s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (11.2s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-812729
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-812729: (10.928271973s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-812729
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-812729
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-812729
--- PASS: TestAddons/StoppedEnableDisable (11.20s)

                                                
                                    
x
+
TestCertOptions (44.39s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-510108 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-510108 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (41.558664153s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-510108 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-510108 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-510108 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-510108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-510108
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-510108: (2.168441481s)
--- PASS: TestCertOptions (44.39s)
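
The SAN assertion behind cert_options_test.go:60 can be reproduced by hand: dump the apiserver certificate inside the node and check that the extra --apiserver-ips and --apiserver-names values from the start line appear. A rough sketch, assuming the cert-options-510108 profile from this run still exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Dump the apiserver certificate inside the node over minikube ssh.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "cert-options-510108",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// The extra SANs passed via --apiserver-ips / --apiserver-names should be listed.
	text := string(out)
	for _, want := range []string{"192.168.15.15", "www.google.com", "localhost"} {
		fmt.Printf("%-15s present: %v\n", want, strings.Contains(text, want))
	}
}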

                                                
                                    
x
+
TestCertExpiration (264.4s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-259311 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0908 13:13:22.078400  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-259311 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (38.354059453s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-259311 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0908 13:17:04.087021  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-259311 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (43.410190609s)
helpers_test.go:175: Cleaning up "cert-expiration-259311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-259311
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-259311: (2.634994494s)
--- PASS: TestCertExpiration (264.40s)
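
The two --cert-expiration values above are plain Go durations (the profile dump elsewhere in this report shows the default as 26280h0m0s): 3m forces the certificates to expire while the test is still waiting, and 8760h on the second start is one year. A quick check of how those strings parse:

package main

import (
	"fmt"
	"time"
)

func main() {
	short, _ := time.ParseDuration("3m")   // expires during the test's wait
	long, _ := time.ParseDuration("8760h") // the renewal value used on the second start
	fmt.Println(short)                          // 3m0s
	fmt.Printf("%.0f days\n", long.Hours()/24)  // 365 days
}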

                                                
                                    
x
+
TestDockerFlags (49.78s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-281717 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-281717 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (46.783053593s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-281717 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-281717 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-281717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-281717
E0908 13:13:38.998344  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-281717: (2.199093472s)
--- PASS: TestDockerFlags (49.78s)
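
docker_test.go:56 and :67 assert that the --docker-env and --docker-opt values from the start line surface on the docker.service unit inside the node. A small sketch of the Environment half of that check, assuming the docker-flags-281717 profile is still running:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// systemd exposes the injected docker-env values on the unit's Environment property.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "docker-flags-281717",
		"ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	env := string(out)
	fmt.Println("FOO=BAR present:", strings.Contains(env, "FOO=BAR"))
	fmt.Println("BAZ=BAT present:", strings.Contains(env, "BAZ=BAT"))
}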

                                                
                                    
x
+
TestForceSystemdFlag (42.48s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-821911 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-821911 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.565748174s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-821911 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-821911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-821911
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-821911: (2.276489133s)
--- PASS: TestForceSystemdFlag (42.48s)
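
The pass/fail signal for this test is the single value printed by docker info --format {{.CgroupDriver}}: with --force-systemd it should read systemd rather than cgroupfs. A sketch of that assertion, assuming the force-systemd-flag-821911 profile from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-flag-821911",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "systemd" {
		fmt.Println("cgroup driver is systemd")
	} else {
		fmt.Printf("unexpected cgroup driver: %q\n", string(out))
	}
}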

                                                
                                    
x
+
TestForceSystemdEnv (43.61s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-848361 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-848361 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.699873825s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-848361 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-848361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-848361
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-848361: (2.460064875s)
--- PASS: TestForceSystemdEnv (43.61s)

                                                
                                    
x
+
TestErrorSpam/setup (31.74s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-404388 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-404388 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-404388 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-404388 --driver=docker  --container-runtime=docker: (31.739401769s)
--- PASS: TestErrorSpam/setup (31.74s)

                                                
                                    
x
+
TestErrorSpam/start (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
x
+
TestErrorSpam/status (1.06s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 status
--- PASS: TestErrorSpam/status (1.06s)

                                                
                                    
x
+
TestErrorSpam/pause (1.44s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 pause
--- PASS: TestErrorSpam/pause (1.44s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

                                                
                                    
x
+
TestErrorSpam/stop (10.96s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 stop: (10.75187467s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-404388 --log_dir /tmp/nospam-404388 stop
--- PASS: TestErrorSpam/stop (10.96s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21508-272936/.minikube/files/etc/test/nested/copy/274796/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (73.92s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-140475 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E0908 12:23:39.004477  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:23:39.011792  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:23:39.023139  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:23:39.044529  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:23:39.085900  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:23:39.167370  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:23:39.328857  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:23:39.650613  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:23:40.292623  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:23:41.574647  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-140475 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m13.922986602s)
--- PASS: TestFunctional/serial/StartWithProxy (73.92s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (50.01s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0908 12:23:42.245947  274796 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-140475 --alsologtostderr -v=8
E0908 12:23:44.136261  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:23:49.257573  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:23:59.499270  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:24:19.981417  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-140475 --alsologtostderr -v=8: (50.006968373s)
functional_test.go:678: soft start took 50.009316703s for "functional-140475" cluster.
I0908 12:24:32.253377  274796 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (50.01s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-140475 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.86s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-140475 cache add registry.k8s.io/pause:3.3: (1.014242115s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.86s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-140475 /tmp/TestFunctionalserialCacheCmdcacheadd_local720884094/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 cache add minikube-local-cache-test:functional-140475
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 cache delete minikube-local-cache-test:functional-140475
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-140475
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-140475 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (300.16808ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
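
The sequence above is: delete the image inside the node, confirm crictl no longer finds it (the expected exit status 1), run cache reload, then confirm the image is back. A loose sketch of the same cycle, assuming the minikube binary path and profile name from this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run shells out to the minikube binary used throughout this report.
func run(args ...string) error {
	return exec.Command("out/minikube-linux-arm64", args...).Run()
}

func main() {
	p := "functional-140475"
	// 1. Remove the image from the node's docker daemon.
	_ = run("-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	// 2. crictl should now fail to find it (the exit status 1 seen above).
	err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("image gone, remote exit status:", ee.ExitCode())
	}
	// 3. Reload from the on-host cache, then re-check.
	_ = run("-p", p, "cache", "reload")
	if run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		fmt.Println("image restored from cache")
	}
}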

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 kubectl -- --context functional-140475 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-140475 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (54.8s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-140475 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0908 12:25:00.944105  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-140475 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (54.795071089s)
functional_test.go:776: restart took 54.795177142s for "functional-140475" cluster.
I0908 12:25:33.539055  274796 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (54.80s)
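
--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision ends up as an ExtraOptions entry on the profile (visible in the DryRun config dump later in this report) and, ultimately, as a flag on the kube-apiserver static pod. One way to spot-check that; the component=kube-apiserver label is the usual static-pod label and is an assumption here:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read the kube-apiserver command line from its static pod spec.
	out, err := exec.Command("kubectl", "--context", "functional-140475",
		"-n", "kube-system", "get", "pods", "-l", "component=kube-apiserver",
		"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println("NamespaceAutoProvision enabled:",
		strings.Contains(string(out), "NamespaceAutoProvision"))
}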

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-140475 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
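
The phase/status lines above come from parsing kubectl's JSON for the tier=control-plane pods (functional_test.go:840/850). A compact re-implementation that models only the fields needed; the struct shapes are illustrative, not the harness's types:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type, Status string
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-140475",
		"-n", "kube-system", "get", "po", "-l", "tier=control-plane", "-o", "json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}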

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-140475 logs: (1.260390745s)
--- PASS: TestFunctional/serial/LogsCmd (1.26s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 logs --file /tmp/TestFunctionalserialLogsFileCmd735634963/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-140475 logs --file /tmp/TestFunctionalserialLogsFileCmd735634963/001/logs.txt: (1.282094613s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.95s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-140475 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-140475
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-140475: exit status 115 (722.191708ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31679 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-140475 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.95s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-140475 config get cpus: exit status 14 (77.805882ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-140475 config get cpus: exit status 14 (74.727417ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-140475 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-140475 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (265.174464ms)

                                                
                                                
-- stdout --
	* [functional-140475] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 12:35:57.324636  318544 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:35:57.324808  318544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:35:57.324821  318544 out.go:374] Setting ErrFile to fd 2...
	I0908 12:35:57.324828  318544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:35:57.325123  318544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
	I0908 12:35:57.325709  318544 out.go:368] Setting JSON to false
	I0908 12:35:57.326972  318544 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8308,"bootTime":1757326650,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 12:35:57.327052  318544 start.go:140] virtualization:  
	I0908 12:35:57.332423  318544 out.go:179] * [functional-140475] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 12:35:57.335411  318544 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 12:35:57.335591  318544 notify.go:220] Checking for updates...
	I0908 12:35:57.341315  318544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:35:57.344264  318544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig
	I0908 12:35:57.347206  318544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube
	I0908 12:35:57.350116  318544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 12:35:57.352982  318544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:35:57.356342  318544 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:35:57.357111  318544 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:35:57.401603  318544 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:35:57.401720  318544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:35:57.493608  318544 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 12:35:57.48232234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:35:57.493711  318544 docker.go:318] overlay module found
	I0908 12:35:57.497689  318544 out.go:179] * Using the docker driver based on existing profile
	I0908 12:35:57.500578  318544 start.go:304] selected driver: docker
	I0908 12:35:57.500594  318544 start.go:918] validating driver "docker" against &{Name:functional-140475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:35:57.500695  318544 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:35:57.504289  318544 out.go:203] 
	W0908 12:35:57.507115  318544 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 12:35:57.510710  318544 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-140475 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-140475 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-140475 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (278.035092ms)

                                                
                                                
-- stdout --
	* [functional-140475] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 12:35:57.069224  318462 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:35:57.069545  318462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:35:57.069561  318462 out.go:374] Setting ErrFile to fd 2...
	I0908 12:35:57.069566  318462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:35:57.072153  318462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
	I0908 12:35:57.072601  318462 out.go:368] Setting JSON to false
	I0908 12:35:57.073580  318462 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8307,"bootTime":1757326650,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 12:35:57.073663  318462 start.go:140] virtualization:  
	I0908 12:35:57.077267  318462 out.go:179] * [functional-140475] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	I0908 12:35:57.080208  318462 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 12:35:57.080400  318462 notify.go:220] Checking for updates...
	I0908 12:35:57.086907  318462 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:35:57.091169  318462 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig
	I0908 12:35:57.094396  318462 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube
	I0908 12:35:57.097303  318462 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 12:35:57.100225  318462 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:35:57.103581  318462 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:35:57.104388  318462 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:35:57.137172  318462 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:35:57.137286  318462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:35:57.226128  318462 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 12:35:57.211163254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:35:57.226229  318462 docker.go:318] overlay module found
	I0908 12:35:57.229379  318462 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0908 12:35:57.232256  318462 start.go:304] selected driver: docker
	I0908 12:35:57.232276  318462 start.go:918] validating driver "docker" against &{Name:functional-140475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-140475 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:35:57.232386  318462 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:35:57.235785  318462 out.go:203] 
	W0908 12:35:57.238681  318462 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 12:35:57.241532  318462 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.35s)
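
The second invocation exercises minikube's Go-template output: -f host:{{.Host}},kublet:{{.Kubelet}},... (the "kublet" spelling is verbatim from the test's format string). A small illustration of how such a template renders against a status-like struct; the type and sample values below are made up for the example, not minikube's actual status struct:

package main

import (
	"os"
	"text/template"
)

// status mirrors the fields referenced by the -f format string above.
type status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	_ = tmpl.Execute(os.Stdout, status{
		Host:       "Running",
		Kubelet:    "Running",
		APIServer:  "Running",
		Kubeconfig: "Configured",
	})
}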

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh -n functional-140475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 cp functional-140475:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2204074601/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh -n functional-140475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh -n functional-140475 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.31s)
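The cp/ssh round trip above (copy a file into the node, then read it back over SSH) can be scripted the same way. A minimal sketch under the same assumptions (minikube on PATH, profile functional-140475, a local testdata/cp-test.txt):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a minikube subcommand against the functional-140475 profile
// and returns its combined output, aborting on error.
func run(args ...string) string {
	cmd := exec.Command("minikube", append([]string{"-p", "functional-140475"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Copy a local file into the node, then verify its contents over SSH,
	// mirroring the cp-test.txt round trip above.
	run("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Print(run("ssh", "-n", "functional-140475", "sudo cat /home/docker/cp-test.txt"))
}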

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/274796/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "sudo cat /etc/test/nested/copy/274796/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/274796.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "sudo cat /etc/ssl/certs/274796.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/274796.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "sudo cat /usr/share/ca-certificates/274796.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2747962.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "sudo cat /etc/ssl/certs/2747962.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2747962.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "sudo cat /usr/share/ca-certificates/2747962.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.62s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-140475 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-140475 ssh "sudo systemctl is-active crio": exit status 1 (288.224241ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-140475 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-140475 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-140475 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 312878: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-140475 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-140475 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-140475 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [291ad5a3-fa5a-4402-b46e-80bb280360bf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [291ad5a3-fa5a-4402-b46e-80bb280360bf] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003406859s
I0908 12:25:51.509906  274796 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-140475 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
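The ingress-IP check above is a single kubectl jsonpath query: once minikube tunnel is running, the LoadBalancer service is assigned an ingress IP that can be polled until it appears. A sketch of that polling loop, assuming kubectl is on PATH, the functional-140475 context exists, and nginx-svc is deployed in the default namespace:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the nginx-svc LoadBalancer for an ingress IP, as the tunnel test does.
	for i := 0; i < 30; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-140475",
			"get", "svc", "nginx-svc", "-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		ip := strings.TrimSpace(string(out))
		if err == nil && ip != "" {
			fmt.Println("ingress IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("no ingress IP assigned; is 'minikube tunnel' running?")
}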

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.95.33 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-140475 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (351.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-140475 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-140475 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-x22bh" [81b0695c-6ef6-40b1-a20f-69e254ab61f4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0908 12:33:38.998227  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "hello-node-75c85bcc94-x22bh" [81b0695c-6ef6-40b1-a20f-69e254ab61f4] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 5m51.003037298s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (351.21s)
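DeployApp is plain kubectl: create a deployment from the kicbase/echo-server image, expose it on a NodePort, and wait for the pod to become Ready (which took almost six minutes here on the arm64 runner). A sketch of the first two steps, assuming kubectl on PATH and the functional-140475 context:

package main

import (
	"log"
	"os/exec"
)

// kubectl runs a kubectl subcommand against the functional-140475 context, aborting on error.
func kubectl(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-140475"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
	}
}

func main() {
	// Create the hello-node deployment and expose it on a NodePort, as above.
	kubectl("create", "deployment", "hello-node", "--image", "kicbase/echo-server")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
	// The test then waits for pods labelled app=hello-node to go Ready;
	// "kubectl wait --for=condition=Ready pod -l app=hello-node" is one way to do that.
}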

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 service list -o json
functional_test.go:1504: Took "524.531149ms" to run "out/minikube-linux-arm64 -p functional-140475 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32007
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32007
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
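With the hello-node deployment exposed, the HTTPS/Format/URL checks above all reduce to variations of "minikube service hello-node --url". A minimal sketch of the plain URL lookup, under the same assumptions as the earlier examples:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Assumptions: minikube on PATH, profile "functional-140475", and a "hello-node"
	// NodePort service already created (see the DeployApp steps above).
	out, err := exec.Command("minikube", "-p", "functional-140475",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatalf("service --url failed: %v", err)
	}
	fmt.Printf("hello-node endpoint: %s", out)
}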

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "374.437452ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "63.874867ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "352.18979ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "55.700343ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-140475 /tmp/TestFunctionalparallelMountCmdany-port2268302564/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757334947499765445" to /tmp/TestFunctionalparallelMountCmdany-port2268302564/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757334947499765445" to /tmp/TestFunctionalparallelMountCmdany-port2268302564/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757334947499765445" to /tmp/TestFunctionalparallelMountCmdany-port2268302564/001/test-1757334947499765445
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-140475 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (366.722437ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0908 12:35:47.867773  274796 retry.go:31] will retry after 708.935347ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 12:35 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 12:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 12:35 test-1757334947499765445
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh cat /mount-9p/test-1757334947499765445
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-140475 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c71cd365-d63f-4c48-bf22-e8b326c5908e] Pending
helpers_test.go:352: "busybox-mount" [c71cd365-d63f-4c48-bf22-e8b326c5908e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c71cd365-d63f-4c48-bf22-e8b326c5908e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c71cd365-d63f-4c48-bf22-e8b326c5908e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00376892s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-140475 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-140475 /tmp/TestFunctionalparallelMountCmdany-port2268302564/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.56s)
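The any-port test starts "minikube mount" as a background process and then retries findmnt over SSH until the 9p mount appears, which is why the first findmnt call above is allowed to fail. A sketch of that pattern; the host directory /tmp/mount-src is a placeholder assumption:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Assumptions: minikube on PATH, profile "functional-140475",
	// and /tmp/mount-src is an existing host directory to export (placeholder path).
	mount := exec.Command("minikube", "mount", "-p", "functional-140475", "/tmp/mount-src:/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatalf("failed to start mount: %v", err)
	}

	// Retry findmnt over SSH until the 9p mount is visible in the guest.
	mounted := false
	for i := 0; i < 10; i++ {
		if exec.Command("minikube", "-p", "functional-140475",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run() == nil {
			mounted = true
			break
		}
		time.Sleep(time.Second)
	}

	mount.Process.Kill() // stop the mount process, as the test's cleanup does
	if !mounted {
		log.Fatal("mount never became visible in the guest")
	}
	log.Println("/mount-9p is mounted")
}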

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-140475 /tmp/TestFunctionalparallelMountCmdspecific-port2132637847/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-140475 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (452.14257ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0908 12:35:56.513506  274796 retry.go:31] will retry after 255.775236ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-140475 /tmp/TestFunctionalparallelMountCmdspecific-port2132637847/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-140475 ssh "sudo umount -f /mount-9p": exit status 1 (367.535389ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-140475 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-140475 /tmp/TestFunctionalparallelMountCmdspecific-port2132637847/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.12s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-140475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816082633/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-140475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816082633/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-140475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816082633/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-140475 ssh "findmnt -T" /mount1: exit status 1 (806.164919ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0908 12:35:58.991653  274796 retry.go:31] will retry after 303.162729ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-140475 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-140475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816082633/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-140475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816082633/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-140475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup816082633/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.59s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-140475 version -o=json --components: (1.026378106s)
--- PASS: TestFunctional/parallel/Version/components (1.03s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-140475 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-140475
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-140475
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-140475 image ls --format short --alsologtostderr:
I0908 12:36:10.901348  320951 out.go:360] Setting OutFile to fd 1 ...
I0908 12:36:10.902635  320951 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:36:10.902660  320951 out.go:374] Setting ErrFile to fd 2...
I0908 12:36:10.902696  320951 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:36:10.903068  320951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
I0908 12:36:10.903939  320951 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:36:10.904189  320951 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:36:10.904794  320951 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
I0908 12:36:10.943467  320951 ssh_runner.go:195] Run: systemctl --version
I0908 12:36:10.943523  320951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
I0908 12:36:10.967324  320951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
I0908 12:36:11.057340  320951 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-140475 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager     │ v1.34.0           │ 996be7e86d9b3 │ 71.5MB │
│ docker.io/library/nginx                     │ alpine            │ 35f3cbee4fb77 │ 52.9MB │
│ registry.k8s.io/pause                       │ 3.1               │ 8057e0500773a │ 525kB  │
│ registry.k8s.io/kube-proxy                  │ v1.34.0           │ 6fc32d66c1411 │ 74.7MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 1611cd07b61d5 │ 3.55MB │
│ registry.k8s.io/pause                       │ latest            │ 8cb2091f603e7 │ 240kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 138784d87c9c5 │ 72.1MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0           │ a25f5ef9c34c3 │ 50.5MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ a1894772a478e │ 205MB  │
│ registry.k8s.io/pause                       │ 3.10.1            │ d7b100cd9a77b │ 514kB  │
│ docker.io/kicbase/echo-server               │ functional-140475 │ ce2d2cda2d858 │ 4.78MB │
│ docker.io/kicbase/echo-server               │ latest            │ ce2d2cda2d858 │ 4.78MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/pause                       │ 3.3               │ 3d18732f8686c │ 484kB  │
│ localhost/my-image                          │ functional-140475 │ 326c51db66b12 │ 1.41MB │
│ docker.io/library/minikube-local-cache-test │ functional-140475 │ 22aec9141cfaa │ 30B    │
│ registry.k8s.io/kube-apiserver              │ v1.34.0           │ d291939e99406 │ 83.7MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-140475 image ls --format table --alsologtostderr:
I0908 12:36:15.230962  321320 out.go:360] Setting OutFile to fd 1 ...
I0908 12:36:15.231375  321320 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:36:15.231389  321320 out.go:374] Setting ErrFile to fd 2...
I0908 12:36:15.231395  321320 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:36:15.236196  321320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
I0908 12:36:15.237343  321320 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:36:15.237557  321320 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:36:15.240603  321320 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
I0908 12:36:15.267310  321320 ssh_runner.go:195] Run: systemctl --version
I0908 12:36:15.267371  321320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
I0908 12:36:15.288656  321320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
I0908 12:36:15.377466  321320 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-140475 image ls --format json --alsologtostderr:
[{"id":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"50500000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-140475","docker.io/kicbase/echo-server:latest"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"996be7e86d9b3a549d718de6
3713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"71500000"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52900000"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"72100000"},{"id":"22aec9141cfaa2ebb5892712e2a3ea2c028bf9f33b29f425df3430f6980b85ff","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-140475"],"size":"30"},{"id":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"83700000"},{"id":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"74700000"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a598708316
4bd00bc0e","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"326c51db66b127af252c3539b30528353bdac22977199e4e3106b81ddf6cb7bf","repoDigests":[],"repoTags":["localhost/my-image:functional-140475"],"size":"1410000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-140475 image ls --format json --alsologtostderr:
I0908 12:36:14.979686  321289 out.go:360] Setting OutFile to fd 1 ...
I0908 12:36:14.979820  321289 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:36:14.979826  321289 out.go:374] Setting ErrFile to fd 2...
I0908 12:36:14.979831  321289 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:36:14.980154  321289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
I0908 12:36:14.980816  321289 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:36:14.980941  321289 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:36:14.981973  321289 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
I0908 12:36:14.999320  321289 ssh_runner.go:195] Run: systemctl --version
I0908 12:36:14.999385  321289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
I0908 12:36:15.035835  321289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
I0908 12:36:15.129791  321289 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-140475 image ls --format yaml --alsologtostderr:
- id: 326c51db66b127af252c3539b30528353bdac22977199e4e3106b81ddf6cb7bf
repoDigests: []
repoTags:
- localhost/my-image:functional-140475
size: "1410000"
- id: 996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "71500000"
- id: 6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "74700000"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52900000"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "72100000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "50500000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-140475
- docker.io/kicbase/echo-server:latest
size: "4780000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 22aec9141cfaa2ebb5892712e2a3ea2c028bf9f33b29f425df3430f6980b85ff
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-140475
size: "30"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "514000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "83700000"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-140475 image ls --format yaml --alsologtostderr:
I0908 12:36:14.750519  321242 out.go:360] Setting OutFile to fd 1 ...
I0908 12:36:14.750689  321242 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:36:14.750719  321242 out.go:374] Setting ErrFile to fd 2...
I0908 12:36:14.750743  321242 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:36:14.751052  321242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
I0908 12:36:14.751860  321242 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:36:14.752281  321242 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:36:14.752777  321242 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
I0908 12:36:14.775486  321242 ssh_runner.go:195] Run: systemctl --version
I0908 12:36:14.775537  321242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
I0908 12:36:14.793413  321242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
I0908 12:36:14.880554  321242 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-140475 ssh pgrep buildkitd: exit status 1 (269.57625ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image build -t localhost/my-image:functional-140475 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-140475 image build -t localhost/my-image:functional-140475 testdata/build --alsologtostderr: (2.966937778s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-140475 image build -t localhost/my-image:functional-140475 testdata/build --alsologtostderr:
I0908 12:36:11.554076  321077 out.go:360] Setting OutFile to fd 1 ...
I0908 12:36:11.554806  321077 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:36:11.554827  321077 out.go:374] Setting ErrFile to fd 2...
I0908 12:36:11.554833  321077 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:36:11.555152  321077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
I0908 12:36:11.555842  321077 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:36:11.557733  321077 config.go:182] Loaded profile config "functional-140475": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 12:36:11.558274  321077 cli_runner.go:164] Run: docker container inspect functional-140475 --format={{.State.Status}}
I0908 12:36:11.578949  321077 ssh_runner.go:195] Run: systemctl --version
I0908 12:36:11.579004  321077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-140475
I0908 12:36:11.599173  321077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/functional-140475/id_rsa Username:docker}
I0908 12:36:11.688508  321077 build_images.go:161] Building image from path: /tmp/build.689874679.tar
I0908 12:36:11.688652  321077 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 12:36:11.697256  321077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.689874679.tar
I0908 12:36:11.700871  321077 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.689874679.tar: stat -c "%s %y" /var/lib/minikube/build/build.689874679.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.689874679.tar': No such file or directory
I0908 12:36:11.700903  321077 ssh_runner.go:362] scp /tmp/build.689874679.tar --> /var/lib/minikube/build/build.689874679.tar (3072 bytes)
I0908 12:36:11.726512  321077 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.689874679
I0908 12:36:11.735741  321077 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.689874679 -xf /var/lib/minikube/build/build.689874679.tar
I0908 12:36:11.745392  321077 docker.go:361] Building image: /var/lib/minikube/build/build.689874679
I0908 12:36:11.745517  321077 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-140475 /var/lib/minikube/build/build.689874679
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:326c51db66b127af252c3539b30528353bdac22977199e4e3106b81ddf6cb7bf done
#8 naming to localhost/my-image:functional-140475 done
#8 DONE 0.1s
I0908 12:36:14.444562  321077 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-140475 /var/lib/minikube/build/build.689874679: (2.6990152s)
I0908 12:36:14.444659  321077 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.689874679
I0908 12:36:14.454754  321077 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.689874679.tar
I0908 12:36:14.463766  321077 build_images.go:217] Built localhost/my-image:functional-140475 from /tmp/build.689874679.tar
I0908 12:36:14.463808  321077 build_images.go:133] succeeded building to: functional-140475
I0908 12:36:14.463822  321077 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.45s)
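The build above is driven entirely through the minikube CLI: the build context is tarred up, copied into the node, and built by the node's Docker daemon. A minimal sketch of the same two calls (build, then list), assuming minikube on PATH, the functional-140475 profile, and a local testdata/build directory containing a Dockerfile:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Build an image inside the node from the local ./testdata/build context.
	build := exec.Command("minikube", "-p", "functional-140475",
		"image", "build", "-t", "localhost/my-image:functional-140475", "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}

	// Confirm the freshly built tag shows up in the node's image list.
	out, err := exec.Command("minikube", "-p", "functional-140475", "image", "ls").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	fmt.Printf("%s", out)
}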

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-140475
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image load --daemon kicbase/echo-server:functional-140475 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image load --daemon kicbase/echo-server:functional-140475 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-140475
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image load --daemon kicbase/echo-server:functional-140475 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image save kicbase/echo-server:functional-140475 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image rm kicbase/echo-server:functional-140475 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-140475
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 image save --daemon kicbase/echo-server:functional-140475 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-140475
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
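
Note: a minimal shell sketch of the image round-trip the ImageCommands subtests above exercise, using the same commands recorded in the log (the profile name functional-140475 and the out/minikube-linux-arm64 binary path are from this run; the local tar path here is illustrative):

  # stage a host image and load it into the cluster's container runtime
  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-140475
  out/minikube-linux-arm64 -p functional-140475 image load --daemon kicbase/echo-server:functional-140475
  out/minikube-linux-arm64 -p functional-140475 image ls

  # save to a tar file, remove it from the cluster, then reload from the file
  out/minikube-linux-arm64 -p functional-140475 image save kicbase/echo-server:functional-140475 ./echo-server-save.tar
  out/minikube-linux-arm64 -p functional-140475 image rm kicbase/echo-server:functional-140475
  out/minikube-linux-arm64 -p functional-140475 image load ./echo-server-save.tar

  # push the in-cluster copy back to the host daemon and verify it arrived
  docker rmi kicbase/echo-server:functional-140475
  out/minikube-linux-arm64 -p functional-140475 image save --daemon kicbase/echo-server:functional-140475
  docker image inspect kicbase/echo-server:functional-140475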

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 update-context --alsologtostderr -v=2
E0908 12:38:38.998399  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:40:02.073491  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-140475 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/DockerEnv/bash (1.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-140475 docker-env) && out/minikube-linux-arm64 status -p functional-140475"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-140475 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.04s)
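
Note: the DockerEnv check points the host docker CLI at the cluster node's Docker daemon for the current shell; a sketch of the same sequence (profile name from this run):

  # export DOCKER_HOST and related variables for this shell only
  eval $(out/minikube-linux-arm64 -p functional-140475 docker-env)
  out/minikube-linux-arm64 status -p functional-140475
  docker images   # now lists the images inside the minikube node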

TestFunctional/delete_echo-server_images (0.11s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-140475
--- PASS: TestFunctional/delete_echo-server_images (0.11s)

TestFunctional/delete_my-image_image (0.07s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-140475
--- PASS: TestFunctional/delete_my-image_image (0.07s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-140475
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestMultiControlPlane/serial/StartCluster (140.45s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m19.592891342s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (140.45s)
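
Note: the command under test brings up the multi-control-plane cluster in one invocation; a sketch of the same start (profile name, memory, and driver flags are the ones recorded above; per the status output later in this report, --ha results in three control-plane nodes):

  out/minikube-linux-arm64 -p ha-710557 start --ha --memory 3072 --wait true --driver=docker --container-runtime=docker
  out/minikube-linux-arm64 -p ha-710557 status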

TestMultiControlPlane/serial/DeployApp (41.88s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 kubectl -- rollout status deployment/busybox: (4.381241119s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0908 12:43:27.967624  274796 retry.go:31] will retry after 907.72502ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0908 12:43:29.073239  274796 retry.go:31] will retry after 2.235605984s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0908 12:43:31.461397  274796 retry.go:31] will retry after 2.630855373s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0908 12:43:34.265245  274796 retry.go:31] will retry after 1.762182236s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0908 12:43:36.197205  274796 retry.go:31] will retry after 7.372780332s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0908 12:43:38.998538  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0908 12:43:43.745791  274796 retry.go:31] will retry after 6.057154861s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0908 12:43:49.998275  274796 retry.go:31] will retry after 12.143639671s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-h4hmh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-szj2m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-xtznq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-h4hmh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-szj2m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-xtznq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-h4hmh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-szj2m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-xtznq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (41.88s)
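
Note: the deploy step applies a busybox Deployment and polls until one pod IP is reported per node (the retries above show it waiting for a third IP); a sketch of the same checks via the minikube kubectl pass-through (manifest path and pod name are from this run):

  out/minikube-linux-arm64 -p ha-710557 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-arm64 -p ha-710557 kubectl -- rollout status deployment/busybox
  # repeat until three pod IPs are listed
  out/minikube-linux-arm64 -p ha-710557 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
  # in-cluster DNS check from one of the pods
  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-h4hmh -- nslookup kubernetes.default.svc.cluster.local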

TestMultiControlPlane/serial/PingHostFromPods (1.7s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-h4hmh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-h4hmh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-szj2m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-szj2m -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-xtznq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-xtznq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.70s)
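
Note: host reachability from inside a pod is checked by resolving host.minikube.internal and pinging the returned address (192.168.49.1 in this run); a sketch using one pod name from this run:

  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-h4hmh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-linux-arm64 -p ha-710557 kubectl -- exec busybox-7b57f96db7-h4hmh -- sh -c "ping -c 1 192.168.49.1"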

TestMultiControlPlane/serial/AddWorkerNode (20.77s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 node add --alsologtostderr -v 5: (18.964104765s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 status --alsologtostderr -v 5: (1.807795653s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.77s)
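
Note: adding a worker is a single node add against the running profile (the new node showed up as ha-710557-m04 in this run); a sketch:

  out/minikube-linux-arm64 -p ha-710557 node add
  out/minikube-linux-arm64 -p ha-710557 status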

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-710557 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.47s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.473789347s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.47s)

TestMultiControlPlane/serial/CopyFile (20.86s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 status --output json --alsologtostderr -v 5: (1.402717246s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp testdata/cp-test.txt ha-710557:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3610736816/001/cp-test_ha-710557.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557:/home/docker/cp-test.txt ha-710557-m02:/home/docker/cp-test_ha-710557_ha-710557-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m02 "sudo cat /home/docker/cp-test_ha-710557_ha-710557-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557:/home/docker/cp-test.txt ha-710557-m03:/home/docker/cp-test_ha-710557_ha-710557-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m03 "sudo cat /home/docker/cp-test_ha-710557_ha-710557-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557:/home/docker/cp-test.txt ha-710557-m04:/home/docker/cp-test_ha-710557_ha-710557-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m04 "sudo cat /home/docker/cp-test_ha-710557_ha-710557-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp testdata/cp-test.txt ha-710557-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3610736816/001/cp-test_ha-710557-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557-m02:/home/docker/cp-test.txt ha-710557:/home/docker/cp-test_ha-710557-m02_ha-710557.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557 "sudo cat /home/docker/cp-test_ha-710557-m02_ha-710557.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557-m02:/home/docker/cp-test.txt ha-710557-m03:/home/docker/cp-test_ha-710557-m02_ha-710557-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m03 "sudo cat /home/docker/cp-test_ha-710557-m02_ha-710557-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557-m02:/home/docker/cp-test.txt ha-710557-m04:/home/docker/cp-test_ha-710557-m02_ha-710557-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m04 "sudo cat /home/docker/cp-test_ha-710557-m02_ha-710557-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp testdata/cp-test.txt ha-710557-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3610736816/001/cp-test_ha-710557-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557-m03:/home/docker/cp-test.txt ha-710557:/home/docker/cp-test_ha-710557-m03_ha-710557.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557 "sudo cat /home/docker/cp-test_ha-710557-m03_ha-710557.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557-m03:/home/docker/cp-test.txt ha-710557-m02:/home/docker/cp-test_ha-710557-m03_ha-710557-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m02 "sudo cat /home/docker/cp-test_ha-710557-m03_ha-710557-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557-m03:/home/docker/cp-test.txt ha-710557-m04:/home/docker/cp-test_ha-710557-m03_ha-710557-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m04 "sudo cat /home/docker/cp-test_ha-710557-m03_ha-710557-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp testdata/cp-test.txt ha-710557-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3610736816/001/cp-test_ha-710557-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557-m04:/home/docker/cp-test.txt ha-710557:/home/docker/cp-test_ha-710557-m04_ha-710557.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557 "sudo cat /home/docker/cp-test_ha-710557-m04_ha-710557.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557-m04:/home/docker/cp-test.txt ha-710557-m02:/home/docker/cp-test_ha-710557-m04_ha-710557-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m02 "sudo cat /home/docker/cp-test_ha-710557-m04_ha-710557-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 cp ha-710557-m04:/home/docker/cp-test.txt ha-710557-m03:/home/docker/cp-test_ha-710557-m04_ha-710557-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m03 "sudo cat /home/docker/cp-test_ha-710557-m04_ha-710557-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.86s)
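
Note: the copy matrix above runs minikube cp in every direction and reads each file back over ssh; a representative slice using the same source file and node paths (the host destination path here is illustrative):

  # host -> node, then verify over ssh
  out/minikube-linux-arm64 -p ha-710557 cp testdata/cp-test.txt ha-710557:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557 "sudo cat /home/docker/cp-test.txt"
  # node -> host
  out/minikube-linux-arm64 -p ha-710557 cp ha-710557:/home/docker/cp-test.txt /tmp/cp-test_ha-710557.txt
  # node -> node
  out/minikube-linux-arm64 -p ha-710557 cp ha-710557:/home/docker/cp-test.txt ha-710557-m02:/home/docker/cp-test_ha-710557_ha-710557-m02.txt
  out/minikube-linux-arm64 -p ha-710557 ssh -n ha-710557-m02 "sudo cat /home/docker/cp-test_ha-710557_ha-710557-m02.txt"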

TestMultiControlPlane/serial/StopSecondaryNode (12.46s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 node stop m02 --alsologtostderr -v 5: (11.690780127s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-710557 status --alsologtostderr -v 5: exit status 7 (773.043495ms)

-- stdout --
	ha-710557
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-710557-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-710557-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-710557-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0908 12:45:01.651273  345445 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:45:01.651460  345445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:45:01.651492  345445 out.go:374] Setting ErrFile to fd 2...
	I0908 12:45:01.651515  345445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:45:01.651821  345445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
	I0908 12:45:01.652184  345445 out.go:368] Setting JSON to false
	I0908 12:45:01.652304  345445 mustload.go:65] Loading cluster: ha-710557
	I0908 12:45:01.652371  345445 notify.go:220] Checking for updates...
	I0908 12:45:01.653840  345445 config.go:182] Loaded profile config "ha-710557": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:45:01.653925  345445 status.go:174] checking status of ha-710557 ...
	I0908 12:45:01.654613  345445 cli_runner.go:164] Run: docker container inspect ha-710557 --format={{.State.Status}}
	I0908 12:45:01.675017  345445 status.go:371] ha-710557 host status = "Running" (err=<nil>)
	I0908 12:45:01.675042  345445 host.go:66] Checking if "ha-710557" exists ...
	I0908 12:45:01.675383  345445 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-710557
	I0908 12:45:01.705253  345445 host.go:66] Checking if "ha-710557" exists ...
	I0908 12:45:01.705629  345445 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:45:01.705722  345445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-710557
	I0908 12:45:01.725081  345445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/ha-710557/id_rsa Username:docker}
	I0908 12:45:01.825829  345445 ssh_runner.go:195] Run: systemctl --version
	I0908 12:45:01.831315  345445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:45:01.845152  345445 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:45:01.916695  345445 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-08 12:45:01.906192093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:45:01.917251  345445 kubeconfig.go:125] found "ha-710557" server: "https://192.168.49.254:8443"
	I0908 12:45:01.917290  345445 api_server.go:166] Checking apiserver status ...
	I0908 12:45:01.917338  345445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:45:01.931763  345445 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2343/cgroup
	I0908 12:45:01.942221  345445 api_server.go:182] apiserver freezer: "12:freezer:/docker/db739a148858e68e8ec6c58c5032fcd885cf1b6798df75981bf3aa912e2707dc/kubepods/burstable/pod2f2afdac1c600e9c856819c18a6f49fb/7f7d021e5fb89efb930f6c9d138dbf6c612525da44f5a46ddde93b50529b7b0e"
	I0908 12:45:01.942291  345445 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/db739a148858e68e8ec6c58c5032fcd885cf1b6798df75981bf3aa912e2707dc/kubepods/burstable/pod2f2afdac1c600e9c856819c18a6f49fb/7f7d021e5fb89efb930f6c9d138dbf6c612525da44f5a46ddde93b50529b7b0e/freezer.state
	I0908 12:45:01.951746  345445 api_server.go:204] freezer state: "THAWED"
	I0908 12:45:01.951779  345445 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 12:45:01.961827  345445 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 12:45:01.961862  345445 status.go:463] ha-710557 apiserver status = Running (err=<nil>)
	I0908 12:45:01.961873  345445 status.go:176] ha-710557 status: &{Name:ha-710557 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:45:01.961892  345445 status.go:174] checking status of ha-710557-m02 ...
	I0908 12:45:01.962228  345445 cli_runner.go:164] Run: docker container inspect ha-710557-m02 --format={{.State.Status}}
	I0908 12:45:01.980607  345445 status.go:371] ha-710557-m02 host status = "Stopped" (err=<nil>)
	I0908 12:45:01.980632  345445 status.go:384] host is not running, skipping remaining checks
	I0908 12:45:01.980639  345445 status.go:176] ha-710557-m02 status: &{Name:ha-710557-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:45:01.980661  345445 status.go:174] checking status of ha-710557-m03 ...
	I0908 12:45:01.981084  345445 cli_runner.go:164] Run: docker container inspect ha-710557-m03 --format={{.State.Status}}
	I0908 12:45:02.002530  345445 status.go:371] ha-710557-m03 host status = "Running" (err=<nil>)
	I0908 12:45:02.002557  345445 host.go:66] Checking if "ha-710557-m03" exists ...
	I0908 12:45:02.002886  345445 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-710557-m03
	I0908 12:45:02.023374  345445 host.go:66] Checking if "ha-710557-m03" exists ...
	I0908 12:45:02.023694  345445 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:45:02.023734  345445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-710557-m03
	I0908 12:45:02.051902  345445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/ha-710557-m03/id_rsa Username:docker}
	I0908 12:45:02.141549  345445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:45:02.155247  345445 kubeconfig.go:125] found "ha-710557" server: "https://192.168.49.254:8443"
	I0908 12:45:02.155279  345445 api_server.go:166] Checking apiserver status ...
	I0908 12:45:02.155329  345445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:45:02.167731  345445 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2290/cgroup
	I0908 12:45:02.179942  345445 api_server.go:182] apiserver freezer: "12:freezer:/docker/243e056a55761f6e94feb1290ec021fcc1e6b9d64ff536c80ba46e2df11cf273/kubepods/burstable/pod6a84ca389fc63871b0c1df70bdca5c62/726d38ef79806a484f94982b2a22f44109aef945f4a811cfcf14e98a22ed0e40"
	I0908 12:45:02.180043  345445 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/243e056a55761f6e94feb1290ec021fcc1e6b9d64ff536c80ba46e2df11cf273/kubepods/burstable/pod6a84ca389fc63871b0c1df70bdca5c62/726d38ef79806a484f94982b2a22f44109aef945f4a811cfcf14e98a22ed0e40/freezer.state
	I0908 12:45:02.189755  345445 api_server.go:204] freezer state: "THAWED"
	I0908 12:45:02.189803  345445 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 12:45:02.198573  345445 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 12:45:02.198601  345445 status.go:463] ha-710557-m03 apiserver status = Running (err=<nil>)
	I0908 12:45:02.198611  345445 status.go:176] ha-710557-m03 status: &{Name:ha-710557-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:45:02.198629  345445 status.go:174] checking status of ha-710557-m04 ...
	I0908 12:45:02.198946  345445 cli_runner.go:164] Run: docker container inspect ha-710557-m04 --format={{.State.Status}}
	I0908 12:45:02.217235  345445 status.go:371] ha-710557-m04 host status = "Running" (err=<nil>)
	I0908 12:45:02.217261  345445 host.go:66] Checking if "ha-710557-m04" exists ...
	I0908 12:45:02.217560  345445 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-710557-m04
	I0908 12:45:02.235294  345445 host.go:66] Checking if "ha-710557-m04" exists ...
	I0908 12:45:02.235860  345445 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:45:02.235930  345445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-710557-m04
	I0908 12:45:02.259468  345445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/ha-710557-m04/id_rsa Username:docker}
	I0908 12:45:02.350383  345445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:45:02.362600  345445 status.go:176] ha-710557-m04 status: &{Name:ha-710557-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.46s)
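
Note: with one control-plane node stopped the cluster stays reachable through the load-balanced endpoint, and status reports the degraded state through its exit code (exit status 7 in the run above); a sketch:

  out/minikube-linux-arm64 -p ha-710557 node stop m02
  out/minikube-linux-arm64 -p ha-710557 status
  echo $?   # non-zero (7 in this run) while any node is stopped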

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

TestMultiControlPlane/serial/RestartSecondaryNode (89.63s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 node start m02 --alsologtostderr -v 5
E0908 12:45:43.058272  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:45:43.064637  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:45:43.076397  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:45:43.098569  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:45:43.140071  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:45:43.222314  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:45:43.383811  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:45:43.705506  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:45:44.347823  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:45:45.629095  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:45:48.190956  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:45:53.312721  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:46:03.554550  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:46:24.036595  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 node start m02 --alsologtostderr -v 5: (1m28.504421719s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 status --alsologtostderr -v 5: (1.023057043s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (89.63s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.98s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (184.72s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 stop --alsologtostderr -v 5
E0908 12:47:04.998341  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 stop --alsologtostderr -v 5: (33.717415225s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 start --wait true --alsologtostderr -v 5
E0908 12:48:26.929616  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:48:38.998411  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 start --wait true --alsologtostderr -v 5: (2m30.82286987s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (184.72s)
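
Note: the restart check is a full stop followed by a start --wait true, comparing the node list before and after; a sketch:

  out/minikube-linux-arm64 -p ha-710557 node list
  out/minikube-linux-arm64 -p ha-710557 stop
  out/minikube-linux-arm64 -p ha-710557 start --wait true
  out/minikube-linux-arm64 -p ha-710557 node list   # should match the pre-stop list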

TestMultiControlPlane/serial/DeleteSecondaryNode (12.01s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 node delete m03 --alsologtostderr -v 5: (11.052956271s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.01s)
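
Note: after deleting a control-plane node the remaining nodes should all report Ready; the go-template is the one the test uses, re-quoted here so it runs directly in a shell:

  out/minikube-linux-arm64 -p ha-710557 node delete m03
  out/minikube-linux-arm64 -p ha-710557 status
  kubectl get nodes
  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'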

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (32.74s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 stop --alsologtostderr -v 5: (32.622126349s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-710557 status --alsologtostderr -v 5: exit status 7 (117.020954ms)

-- stdout --
	ha-710557
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-710557-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-710557-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0908 12:50:23.957404  372746 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:50:23.957515  372746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:50:23.957527  372746 out.go:374] Setting ErrFile to fd 2...
	I0908 12:50:23.957532  372746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:50:23.957774  372746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
	I0908 12:50:23.957962  372746 out.go:368] Setting JSON to false
	I0908 12:50:23.958014  372746 mustload.go:65] Loading cluster: ha-710557
	I0908 12:50:23.958090  372746 notify.go:220] Checking for updates...
	I0908 12:50:23.959306  372746 config.go:182] Loaded profile config "ha-710557": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 12:50:23.959342  372746 status.go:174] checking status of ha-710557 ...
	I0908 12:50:23.960107  372746 cli_runner.go:164] Run: docker container inspect ha-710557 --format={{.State.Status}}
	I0908 12:50:23.977542  372746 status.go:371] ha-710557 host status = "Stopped" (err=<nil>)
	I0908 12:50:23.977563  372746 status.go:384] host is not running, skipping remaining checks
	I0908 12:50:23.977570  372746 status.go:176] ha-710557 status: &{Name:ha-710557 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:50:23.977612  372746 status.go:174] checking status of ha-710557-m02 ...
	I0908 12:50:23.977915  372746 cli_runner.go:164] Run: docker container inspect ha-710557-m02 --format={{.State.Status}}
	I0908 12:50:24.001257  372746 status.go:371] ha-710557-m02 host status = "Stopped" (err=<nil>)
	I0908 12:50:24.001277  372746 status.go:384] host is not running, skipping remaining checks
	I0908 12:50:24.001285  372746 status.go:176] ha-710557-m02 status: &{Name:ha-710557-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:50:24.001313  372746 status.go:174] checking status of ha-710557-m04 ...
	I0908 12:50:24.001636  372746 cli_runner.go:164] Run: docker container inspect ha-710557-m04 --format={{.State.Status}}
	I0908 12:50:24.024627  372746 status.go:371] ha-710557-m04 host status = "Stopped" (err=<nil>)
	I0908 12:50:24.024650  372746 status.go:384] host is not running, skipping remaining checks
	I0908 12:50:24.024658  372746 status.go:176] ha-710557-m04 status: &{Name:ha-710557-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.74s)

TestMultiControlPlane/serial/RestartCluster (108.2s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0908 12:50:43.058267  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:51:10.770993  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m47.254569837s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (108.20s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

TestMultiControlPlane/serial/AddSecondaryNode (38.96s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 node add --control-plane --alsologtostderr -v 5: (37.618082798s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-710557 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-710557 status --alsologtostderr -v 5: (1.346064915s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (38.96s)
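
Note: a control-plane node is added back the same way a worker is, just with --control-plane; a sketch:

  out/minikube-linux-arm64 -p ha-710557 node add --control-plane
  out/minikube-linux-arm64 -p ha-710557 status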

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.089218834s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

TestImageBuild/serial/Setup (31.85s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-988766 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-988766 --driver=docker  --container-runtime=docker: (31.851618273s)
--- PASS: TestImageBuild/serial/Setup (31.85s)

TestImageBuild/serial/NormalBuild (1.62s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-988766
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-988766: (1.620909374s)
--- PASS: TestImageBuild/serial/NormalBuild (1.62s)

TestImageBuild/serial/BuildWithBuildArg (0.92s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-988766
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.92s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.72s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-988766
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.72s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.96s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-988766
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.96s)

                                                
                                    
TestJSONOutput/start/Command (43.34s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-452689 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-452689 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (43.331839527s)
--- PASS: TestJSONOutput/start/Command (43.34s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-452689 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.52s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-452689 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.85s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-452689 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-452689 --output=json --user=testUser: (5.844920494s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-065395 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-065395 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (95.942457ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e4645772-dcae-4192-8048-1c0e33f52707","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-065395] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"45df7d47-5a3a-4bc9-a377-8c603bcf289c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21508"}}
	{"specversion":"1.0","id":"7685e0d9-c5af-4a75-b50f-627db3817354","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f8880de2-440c-41fb-8d78-a9f78a6f5c3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig"}}
	{"specversion":"1.0","id":"8cf90637-8aee-4d80-9bfd-0347db42d084","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube"}}
	{"specversion":"1.0","id":"d9b877f4-73c1-4cf1-a23f-fdd70da7e330","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3e443fbc-7c27-4af5-8271-25393c3596b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"36193e85-b2c8-4b57-a316-76cbd3448853","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-065395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-065395
--- PASS: TestErrorJSONOutput (0.24s)
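
Each line in the stdout above is a CloudEvents-style JSON object emitted by --output=json: ordinary progress carries type io.k8s.sigs.minikube.step or .info, and the unsupported-driver failure surfaces as an io.k8s.sigs.minikube.error event with exit code 56. Below is a small Go sketch of consuming that stream; the struct mirrors only the fields visible in this log and is an assumption, not minikube's own event types.

	// Package jsonevents: a sketch of reading minikube's --output=json stream.
	package jsonevents

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"io"
	)

	// event holds the fields visible in the report above; the data values are
	// all strings in this output format.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	// printErrors scans one JSON event per line and reports error events such
	// as the DRV_UNSUPPORTED_OS failure above.
	func printErrors(r io.Reader) error {
		sc := bufio.NewScanner(r)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip lines that are not JSON events
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
		return sc.Err()
	}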

                                                
                                    
TestKicCustomNetwork/create_custom_network (35.18s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-796970 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-796970 --network=: (32.975681211s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-796970" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-796970
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-796970: (2.178068893s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.18s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.59s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-015685 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-015685 --network=bridge: (32.533968201s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-015685" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-015685
E0908 12:55:43.058505  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-015685: (2.034046978s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.59s)

                                                
                                    
TestKicExistingNetwork (32.39s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0908 12:55:43.135210  274796 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0908 12:55:43.149635  274796 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0908 12:55:43.149707  274796 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0908 12:55:43.149725  274796 cli_runner.go:164] Run: docker network inspect existing-network
W0908 12:55:43.165090  274796 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0908 12:55:43.165120  274796 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0908 12:55:43.165134  274796 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0908 12:55:43.165306  274796 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0908 12:55:43.181658  274796 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c48bd151818a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2a:c1:2f:d6:24:b4} reservation:<nil>}
I0908 12:55:43.181906  274796 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400171e720}
I0908 12:55:43.181923  274796 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0908 12:55:43.181972  274796 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0908 12:55:43.242292  274796 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-316483 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-316483 --network=existing-network: (30.191067853s)
helpers_test.go:175: Cleaning up "existing-network-316483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-316483
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-316483: (2.053734493s)
I0908 12:56:15.503662  274796 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.39s)
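
The network_create lines above show the subnet search in action: 192.168.49.0/24 is already taken by an existing minikube bridge, so the next candidate, 192.168.58.0/24, is chosen and the docker network is created there. A rough Go sketch of that walk follows; the starting block and the step of 9 between candidates are inferred from the subnets seen in this report, not taken from minikube's source.

	// Package subnets: a sketch of picking a free private /24 for a new
	// docker network, as in the TestKicExistingNetwork log above.
	package subnets

	import "fmt"

	// pickFreeSubnet returns the first 192.168.x.0/24 block not already in
	// use; taken would be filled from docker network inspect results.
	func pickFreeSubnet(taken map[string]bool) (string, error) {
		for third := 49; third <= 247; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr, nil
			}
		}
		return "", fmt.Errorf("no free private /24 found")
	}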

                                                
                                    
TestKicCustomSubnet (36.05s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-667725 --subnet=192.168.60.0/24
E0908 12:56:42.076271  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-667725 --subnet=192.168.60.0/24: (33.839897367s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-667725 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-667725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-667725
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-667725: (2.184662251s)
--- PASS: TestKicCustomSubnet (36.05s)

                                                
                                    
TestKicStaticIP (33.95s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-579918 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-579918 --static-ip=192.168.200.200: (31.677341155s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-579918 ip
helpers_test.go:175: Cleaning up "static-ip-579918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-579918
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-579918: (2.119849527s)
--- PASS: TestKicStaticIP (33.95s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (69.95s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-219013 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-219013 --driver=docker  --container-runtime=docker: (29.4374193s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-222147 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-222147 --driver=docker  --container-runtime=docker: (34.714562019s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-219013
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-222147
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-222147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-222147
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-222147: (2.227973209s)
helpers_test.go:175: Cleaning up "first-219013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-219013
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-219013: (2.148510034s)
--- PASS: TestMinikubeProfile (69.95s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.04s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-735256 --memory=3072 --mount-string /tmp/TestMountStartserial4247421929/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0908 12:58:38.998310  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-735256 --memory=3072 --mount-string /tmp/TestMountStartserial4247421929/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.037080058s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.04s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-735256 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.72s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-737299 --memory=3072 --mount-string /tmp/TestMountStartserial4247421929/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-737299 --memory=3072 --mount-string /tmp/TestMountStartserial4247421929/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.718743892s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.72s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-737299 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.51s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-735256 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-735256 --alsologtostderr -v=5: (1.513499134s)
--- PASS: TestMountStart/serial/DeleteFirst (1.51s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-737299 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-737299
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-737299: (1.202317884s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.88s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-737299
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-737299: (7.87488611s)
--- PASS: TestMountStart/serial/RestartStopped (8.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-737299 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (67.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-029261 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-029261 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m7.292689495s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.82s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (48.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-029261 -- rollout status deployment/busybox: (3.16666807s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0908 13:00:17.917224  274796 retry.go:31] will retry after 766.74003ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0908 13:00:18.864213  274796 retry.go:31] will retry after 1.209459522s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0908 13:00:20.233200  274796 retry.go:31] will retry after 1.658422521s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0908 13:00:22.044028  274796 retry.go:31] will retry after 4.950761572s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0908 13:00:27.160272  274796 retry.go:31] will retry after 2.912488403s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0908 13:00:30.229085  274796 retry.go:31] will retry after 3.851821497s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0908 13:00:34.234840  274796 retry.go:31] will retry after 11.773180928s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0908 13:00:43.060583  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0908 13:00:46.165574  274796 retry.go:31] will retry after 14.066189426s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- exec busybox-7b57f96db7-rcqzq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- exec busybox-7b57f96db7-zs7fl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- exec busybox-7b57f96db7-rcqzq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- exec busybox-7b57f96db7-zs7fl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- exec busybox-7b57f96db7-rcqzq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- exec busybox-7b57f96db7-zs7fl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (48.04s)
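
The retries above poll the busybox deployment until both pods report an IP; only 10.244.0.3 shows up at first because the second replica is still coming up on the other node, so the test backs off and asks again until two addresses appear. A sketch of that polling loop in Go follows; the test itself drives kubectl through the minikube wrapper, while this sketch calls kubectl directly with --context and picks an arbitrary 2-second interval.

	// Package podips: a sketch of waiting for all pod IPs to be assigned.
	package podips

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPodIPs polls the jsonpath query from the log until the expected
	// number of pod IPs shows up or the deadline passes.
	func waitForPodIPs(context string, want int, timeout time.Duration) ([]string, error) {
		deadline := time.Now().Add(timeout)
		for {
			out, err := exec.Command("kubectl", "--context", context, "get", "pods",
				"-o", "jsonpath={.items[*].status.podIP}").Output()
			if err == nil {
				if ips := strings.Fields(string(out)); len(ips) >= want {
					return ips, nil
				}
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out waiting for %d pod IPs", want)
			}
			time.Sleep(2 * time.Second)
		}
	}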

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- exec busybox-7b57f96db7-rcqzq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- exec busybox-7b57f96db7-rcqzq -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- exec busybox-7b57f96db7-zs7fl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-029261 -- exec busybox-7b57f96db7-zs7fl -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.07s)
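
The pipeline above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, pulls the resolved address out of busybox's nslookup output (fifth line, third space-separated field) so that the follow-up ping -c 1 192.168.67.1 can reach the host from inside each pod. The same extraction in Go, as a sketch; the line and field positions are busybox-specific assumptions.

	// Package hostip: a sketch of extracting the host address from busybox
	// nslookup output, mirroring awk 'NR==5' | cut -d' ' -f3.
	package hostip

	import (
		"fmt"
		"strings"
	)

	func hostIPFromNslookup(output string) (string, error) {
		lines := strings.Split(output, "\n")
		if len(lines) < 5 {
			return "", fmt.Errorf("unexpected nslookup output: %q", output)
		}
		fields := strings.Split(lines[4], " ") // NR==5, split on single spaces like cut
		if len(fields) < 3 {
			return "", fmt.Errorf("unexpected nslookup line: %q", lines[4])
		}
		return fields[2], nil // -f3
	}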

                                                
                                    
TestMultiNode/serial/AddNode (15.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-029261 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-029261 -v=5 --alsologtostderr: (15.066396207s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.82s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-029261 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.12s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.94s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 cp testdata/cp-test.txt multinode-029261:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 cp multinode-029261:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3147572739/001/cp-test_multinode-029261.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 cp multinode-029261:/home/docker/cp-test.txt multinode-029261-m02:/home/docker/cp-test_multinode-029261_multinode-029261-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261-m02 "sudo cat /home/docker/cp-test_multinode-029261_multinode-029261-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 cp multinode-029261:/home/docker/cp-test.txt multinode-029261-m03:/home/docker/cp-test_multinode-029261_multinode-029261-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261-m03 "sudo cat /home/docker/cp-test_multinode-029261_multinode-029261-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 cp testdata/cp-test.txt multinode-029261-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 cp multinode-029261-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3147572739/001/cp-test_multinode-029261-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 cp multinode-029261-m02:/home/docker/cp-test.txt multinode-029261:/home/docker/cp-test_multinode-029261-m02_multinode-029261.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261 "sudo cat /home/docker/cp-test_multinode-029261-m02_multinode-029261.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 cp multinode-029261-m02:/home/docker/cp-test.txt multinode-029261-m03:/home/docker/cp-test_multinode-029261-m02_multinode-029261-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261-m03 "sudo cat /home/docker/cp-test_multinode-029261-m02_multinode-029261-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 cp testdata/cp-test.txt multinode-029261-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 cp multinode-029261-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3147572739/001/cp-test_multinode-029261-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 cp multinode-029261-m03:/home/docker/cp-test.txt multinode-029261:/home/docker/cp-test_multinode-029261-m03_multinode-029261.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261 "sudo cat /home/docker/cp-test_multinode-029261-m03_multinode-029261.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 cp multinode-029261-m03:/home/docker/cp-test.txt multinode-029261-m02:/home/docker/cp-test_multinode-029261-m03_multinode-029261-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 ssh -n multinode-029261-m02 "sudo cat /home/docker/cp-test_multinode-029261-m03_multinode-029261-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.63s)

                                                
                                    
TestMultiNode/serial/StopNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-029261 node stop m03: (1.246918815s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-029261 status: exit status 7 (502.940876ms)

                                                
                                                
-- stdout --
	multinode-029261
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-029261-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-029261-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-029261 status --alsologtostderr: exit status 7 (530.942835ms)

                                                
                                                
-- stdout --
	multinode-029261
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-029261-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-029261-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 13:01:32.769716  447417 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:01:32.769956  447417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:01:32.769982  447417 out.go:374] Setting ErrFile to fd 2...
	I0908 13:01:32.770018  447417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:01:32.770536  447417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
	I0908 13:01:32.770903  447417 out.go:368] Setting JSON to false
	I0908 13:01:32.770965  447417 mustload.go:65] Loading cluster: multinode-029261
	I0908 13:01:32.772571  447417 config.go:182] Loaded profile config "multinode-029261": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 13:01:32.772643  447417 status.go:174] checking status of multinode-029261 ...
	I0908 13:01:32.774335  447417 notify.go:220] Checking for updates...
	I0908 13:01:32.774836  447417 cli_runner.go:164] Run: docker container inspect multinode-029261 --format={{.State.Status}}
	I0908 13:01:32.793689  447417 status.go:371] multinode-029261 host status = "Running" (err=<nil>)
	I0908 13:01:32.793714  447417 host.go:66] Checking if "multinode-029261" exists ...
	I0908 13:01:32.794172  447417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-029261
	I0908 13:01:32.818079  447417 host.go:66] Checking if "multinode-029261" exists ...
	I0908 13:01:32.818394  447417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:01:32.818443  447417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-029261
	I0908 13:01:32.843962  447417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/multinode-029261/id_rsa Username:docker}
	I0908 13:01:32.937376  447417 ssh_runner.go:195] Run: systemctl --version
	I0908 13:01:32.941828  447417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:01:32.953861  447417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:01:33.030071  447417 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-08 13:01:33.002269235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:01:33.030661  447417 kubeconfig.go:125] found "multinode-029261" server: "https://192.168.67.2:8443"
	I0908 13:01:33.030694  447417 api_server.go:166] Checking apiserver status ...
	I0908 13:01:33.030737  447417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:01:33.043751  447417 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2262/cgroup
	I0908 13:01:33.054184  447417 api_server.go:182] apiserver freezer: "12:freezer:/docker/9d7ea184535136650b02b4d9d026ad26afaa7dd0c484eda08450eea3b68c2046/kubepods/burstable/pod733c132d867e910f13e8c23f19f7912d/6f79fa402bc7a83ea519dde56eec659a84559b6513d3df1b807645319e065455"
	I0908 13:01:33.054302  447417 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9d7ea184535136650b02b4d9d026ad26afaa7dd0c484eda08450eea3b68c2046/kubepods/burstable/pod733c132d867e910f13e8c23f19f7912d/6f79fa402bc7a83ea519dde56eec659a84559b6513d3df1b807645319e065455/freezer.state
	I0908 13:01:33.063680  447417 api_server.go:204] freezer state: "THAWED"
	I0908 13:01:33.063710  447417 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0908 13:01:33.071886  447417 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0908 13:01:33.071918  447417 status.go:463] multinode-029261 apiserver status = Running (err=<nil>)
	I0908 13:01:33.071932  447417 status.go:176] multinode-029261 status: &{Name:multinode-029261 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:01:33.071953  447417 status.go:174] checking status of multinode-029261-m02 ...
	I0908 13:01:33.072327  447417 cli_runner.go:164] Run: docker container inspect multinode-029261-m02 --format={{.State.Status}}
	I0908 13:01:33.089308  447417 status.go:371] multinode-029261-m02 host status = "Running" (err=<nil>)
	I0908 13:01:33.089335  447417 host.go:66] Checking if "multinode-029261-m02" exists ...
	I0908 13:01:33.089636  447417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-029261-m02
	I0908 13:01:33.106930  447417 host.go:66] Checking if "multinode-029261-m02" exists ...
	I0908 13:01:33.107278  447417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:01:33.107327  447417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-029261-m02
	I0908 13:01:33.128262  447417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21508-272936/.minikube/machines/multinode-029261-m02/id_rsa Username:docker}
	I0908 13:01:33.217315  447417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:01:33.229383  447417 status.go:176] multinode-029261-m02 status: &{Name:multinode-029261-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:01:33.229428  447417 status.go:174] checking status of multinode-029261-m03 ...
	I0908 13:01:33.229734  447417 cli_runner.go:164] Run: docker container inspect multinode-029261-m03 --format={{.State.Status}}
	I0908 13:01:33.246731  447417 status.go:371] multinode-029261-m03 host status = "Stopped" (err=<nil>)
	I0908 13:01:33.246763  447417 status.go:384] host is not running, skipping remaining checks
	I0908 13:01:33.246770  447417 status.go:176] multinode-029261-m03 status: &{Name:multinode-029261-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
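
The stderr above shows how status classifies the control-plane node: it finds the kube-apiserver process, reads its freezer cgroup (expecting "THAWED"), then probes https://192.168.67.2:8443/healthz and treats a 200 "ok" as Running. A bare-bones Go sketch of that final probe follows; skipping certificate verification here is an illustrative shortcut, whereas the real check trusts the cluster's CA.

	// Package apihealth: a sketch of the apiserver healthz probe seen in the
	// status output above.
	package apihealth

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// apiserverHealthy returns nil if GET https://<ip>:8443/healthz answers 200.
	func apiserverHealthy(ip string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(fmt.Sprintf("https://%s:8443/healthz", ip))
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}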

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-029261 node start m03 -v=5 --alsologtostderr: (8.235879092s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.06s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (74.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-029261
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-029261
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-029261: (22.760113121s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-029261 --wait=true -v=5 --alsologtostderr
E0908 13:02:06.132974  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-029261 --wait=true -v=5 --alsologtostderr: (51.969528342s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-029261
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.86s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-029261 node delete m03: (4.974528756s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.65s)
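
The go-template passed to `kubectl get nodes` above extracts each node's Ready condition. The same check can be done by decoding the JSON output instead of a template; a small illustrative sketch follows, with the struct limited to the fields this check needs.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// nodeList models just enough of `kubectl get nodes -o json` to read each
	// node's Ready condition, like the go-template used by the test.
	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var nodes nodeList
		if err := json.Unmarshal(out, &nodes); err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
				}
			}
		}
	}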

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-029261 stop: (21.590796274s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-029261 status: exit status 7 (90.092352ms)

                                                
                                                
-- stdout --
	multinode-029261
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-029261-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-029261 status --alsologtostderr: exit status 7 (91.027167ms)

                                                
                                                
-- stdout --
	multinode-029261
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-029261-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 13:03:24.558660  460782 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:03:24.558793  460782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:03:24.558804  460782 out.go:374] Setting ErrFile to fd 2...
	I0908 13:03:24.558809  460782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:03:24.559076  460782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-272936/.minikube/bin
	I0908 13:03:24.559302  460782 out.go:368] Setting JSON to false
	I0908 13:03:24.559361  460782 mustload.go:65] Loading cluster: multinode-029261
	I0908 13:03:24.559453  460782 notify.go:220] Checking for updates...
	I0908 13:03:24.559815  460782 config.go:182] Loaded profile config "multinode-029261": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 13:03:24.559839  460782 status.go:174] checking status of multinode-029261 ...
	I0908 13:03:24.560498  460782 cli_runner.go:164] Run: docker container inspect multinode-029261 --format={{.State.Status}}
	I0908 13:03:24.579433  460782 status.go:371] multinode-029261 host status = "Stopped" (err=<nil>)
	I0908 13:03:24.579503  460782 status.go:384] host is not running, skipping remaining checks
	I0908 13:03:24.579519  460782 status.go:176] multinode-029261 status: &{Name:multinode-029261 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:03:24.579564  460782 status.go:174] checking status of multinode-029261-m02 ...
	I0908 13:03:24.579880  460782 cli_runner.go:164] Run: docker container inspect multinode-029261-m02 --format={{.State.Status}}
	I0908 13:03:24.597532  460782 status.go:371] multinode-029261-m02 host status = "Stopped" (err=<nil>)
	I0908 13:03:24.597556  460782 status.go:384] host is not running, skipping remaining checks
	I0908 13:03:24.597564  460782 status.go:176] multinode-029261-m02 status: &{Name:multinode-029261-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.77s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-029261 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E0908 13:03:38.998113  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-029261 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (50.770127988s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-029261 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.45s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-029261
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-029261-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-029261-m02 --driver=docker  --container-runtime=docker: exit status 14 (108.36381ms)

                                                
                                                
-- stdout --
	* [multinode-029261-m02] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-029261-m02' is duplicated with machine name 'multinode-029261-m02' in profile 'multinode-029261'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-029261-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-029261-m03 --driver=docker  --container-runtime=docker: (33.944303666s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-029261
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-029261: exit status 80 (323.871978ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-029261 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-029261-m03 already exists in multinode-029261-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-029261-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-029261-m03: (2.165095813s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.60s)
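
Both failures above come from name-collision checks: a new profile may not reuse a machine name already owned by another profile, and `node add` refuses a node name that already exists. A minimal uniqueness check in that spirit — illustrative only, not minikube's implementation; the profile map below is invented for the example.

	package main

	import "fmt"

	// validateProfileName rejects a candidate profile name that collides with a
	// machine name already owned by an existing profile, mirroring the MK_USAGE
	// error above.
	func validateProfileName(candidate string, profiles map[string][]string) error {
		for profile, machines := range profiles {
			for _, m := range machines {
				if m == candidate {
					return fmt.Errorf(
						"profile name %q is duplicated with machine name %q in profile %q",
						candidate, m, profile)
				}
			}
		}
		return nil
	}

	func main() {
		existing := map[string][]string{
			"multinode-029261": {"multinode-029261", "multinode-029261-m02"},
		}
		fmt.Println(validateProfileName("multinode-029261-m02", existing)) // collides
		fmt.Println(validateProfileName("multinode-029261-m04", existing)) // ok
	}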

                                                
                                    
TestPreload (149.63s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-025569 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
E0908 13:05:43.058597  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-025569 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (1m18.411195448s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-025569 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-025569 image pull gcr.io/k8s-minikube/busybox: (2.291112497s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-025569
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-025569: (10.817439238s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-025569 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-025569 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (55.723051372s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-025569 image list
helpers_test.go:175: Cleaning up "test-preload-025569" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-025569
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-025569: (2.159674949s)
--- PASS: TestPreload (149.63s)

                                                
                                    
TestScheduledStopUnix (105.85s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-144122 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-144122 --memory=3072 --driver=docker  --container-runtime=docker: (32.604759698s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-144122 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-144122 -n scheduled-stop-144122
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-144122 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0908 13:07:59.449570  274796 retry.go:31] will retry after 52.29µs: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.450746  274796 retry.go:31] will retry after 162.166µs: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.457865  274796 retry.go:31] will retry after 224.101µs: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.459027  274796 retry.go:31] will retry after 483.268µs: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.460262  274796 retry.go:31] will retry after 676.128µs: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.461399  274796 retry.go:31] will retry after 916.706µs: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.462533  274796 retry.go:31] will retry after 1.02754ms: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.463658  274796 retry.go:31] will retry after 2.451655ms: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.468026  274796 retry.go:31] will retry after 2.675053ms: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.471297  274796 retry.go:31] will retry after 2.696323ms: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.474590  274796 retry.go:31] will retry after 4.016549ms: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.478743  274796 retry.go:31] will retry after 5.499807ms: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.484974  274796 retry.go:31] will retry after 14.343243ms: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.500219  274796 retry.go:31] will retry after 23.808144ms: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.524454  274796 retry.go:31] will retry after 18.988771ms: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
I0908 13:07:59.543703  274796 retry.go:31] will retry after 51.973236ms: open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/scheduled-stop-144122/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-144122 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-144122 -n scheduled-stop-144122
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-144122
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-144122 --schedule 15s
E0908 13:08:38.998687  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-144122
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-144122: exit status 7 (69.077167ms)

                                                
                                                
-- stdout --
	scheduled-stop-144122
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-144122 -n scheduled-stop-144122
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-144122 -n scheduled-stop-144122: exit status 7 (68.542693ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-144122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-144122
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-144122: (1.669533997s)
--- PASS: TestScheduledStopUnix (105.85s)
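
The `retry.go:31` lines above show the test polling for the scheduled-stop pid file with short, growing sleeps until it appears. A minimal retry-with-backoff sketch of the same idea; the path and limits below are illustrative.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForFile polls for path, doubling the sleep between attempts, like the
	// retry loop in the log above.
	func waitForFile(path string, maxWait time.Duration) error {
		delay := 50 * time.Microsecond
		deadline := time.Now().Add(maxWait)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			} else if !os.IsNotExist(err) {
				return err
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			fmt.Printf("will retry after %v\n", delay)
			time.Sleep(delay)
			delay *= 2
		}
	}

	func main() {
		err := waitForFile("/tmp/scheduled-stop-example/pid", 2*time.Second)
		fmt.Println("result:", err)
	}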

                                                
                                    
TestSkaffold (145s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3892025760 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-520822 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-520822 --memory=3072 --driver=docker  --container-runtime=docker: (35.964464229s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3892025760 run --minikube-profile skaffold-520822 --kube-context skaffold-520822 --status-check=true --port-forward=false --interactive=false
E0908 13:10:43.060261  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3892025760 run --minikube-profile skaffold-520822 --kube-context skaffold-520822 --status-check=true --port-forward=false --interactive=false: (1m33.325684688s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-594785fdd6-cspbw" [faa4a9c0-0107-4dfb-9e14-d8955616157b] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003819632s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-5b6c899b7c-g8lvz" [732bfd7e-ae09-49a7-834d-284542e4417a] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004081489s
helpers_test.go:175: Cleaning up "skaffold-520822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-520822
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-520822: (3.058625675s)
--- PASS: TestSkaffold (145.00s)
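
The test waits up to 1m0s for pods matching each label selector to become healthy. Outside the test framework, the same wait can be expressed with `kubectl wait`; a minimal sketch via os/exec, with the label and timeout taken from the log (this is not the test's own polling mechanism).

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Block until all pods labelled app=leeroy-app report the Ready condition,
		// roughly what the test's 1m0s health wait checks.
		cmd := exec.Command("kubectl", "wait", "--for=condition=Ready",
			"pod", "-l", "app=leeroy-app", "--timeout=60s")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("pods did not become ready:", err)
		}
	}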

                                                
                                    
TestInsufficientStorage (10.91s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-739486 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-739486 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.611062244s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3cb2154b-2a87-4016-97a9-c16c7085318c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-739486] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a331a799-0b02-47d9-a805-518892d7c0c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21508"}}
	{"specversion":"1.0","id":"a498f132-2d95-4431-8bcb-ca8eb7a76006","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"64fc799b-cb3d-49a6-b297-c2ca5c33d995","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig"}}
	{"specversion":"1.0","id":"260f81e0-6990-4e5e-acdd-aca6a81037d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube"}}
	{"specversion":"1.0","id":"c84d02dd-fad0-4105-b445-eede67f0c849","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"85ae6149-6ee1-42d2-8748-8f48068374d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0ceca7b6-ffe1-4f6a-be0f-daed2539f152","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"379ece28-e486-4045-84e2-5b1fee09049b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b9f78262-5ed8-482e-8883-cab356ee3c6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2bb71df6-a104-4825-be77-9169d382045d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"36ce2dc9-6178-4e57-a19c-4b631bfb7ee4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-739486\" primary control-plane node in \"insufficient-storage-739486\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d966e1ee-b293-4d0e-a878-12246d68333a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756980985-21488 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e8f294e8-4b78-45e8-b72f-1ede2b8039c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0ff27d1f-0cd2-4b93-ae76-a7233817a52d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-739486 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-739486 --output=json --layout=cluster: exit status 7 (301.084452ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-739486","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-739486","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 13:11:46.093467  495104 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-739486" does not appear in /home/jenkins/minikube-integration/21508-272936/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-739486 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-739486 --output=json --layout=cluster: exit status 7 (292.742535ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-739486","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-739486","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 13:11:46.386399  495168 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-739486" does not appear in /home/jenkins/minikube-integration/21508-272936/kubeconfig
	E0908 13:11:46.396746  495168 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/insufficient-storage-739486/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-739486" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-739486
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-739486: (1.699523584s)
--- PASS: TestInsufficientStorage (10.91s)
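
`minikube status --output=json --layout=cluster` prints a single JSON document (shown in the stdout blocks above) whose StatusCode/StatusName fields encode conditions such as 507/InsufficientStorage. A small sketch that decodes that payload and flags the storage condition; the struct covers only the fields used here, and the sample JSON is trimmed from the log.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// clusterStatus models part of the `--output=json --layout=cluster` payload
	// seen in the stdout above.
	type clusterStatus struct {
		Name         string `json:"Name"`
		StatusCode   int    `json:"StatusCode"`
		StatusName   string `json:"StatusName"`
		StatusDetail string `json:"StatusDetail"`
	}

	func main() {
		raw := []byte(`{"Name":"insufficient-storage-739486","StatusCode":507,
			"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space"}`)
		var st clusterStatus
		if err := json.Unmarshal(raw, &st); err != nil {
			panic(err)
		}
		if st.StatusCode == 507 {
			fmt.Printf("%s: %s (%s)\n", st.Name, st.StatusName, st.StatusDetail)
		}
	}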

                                                
                                    
TestRunningBinaryUpgrade (88.74s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3364060895 start -p running-upgrade-756382 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3364060895 start -p running-upgrade-756382 --memory=3072 --vm-driver=docker  --container-runtime=docker: (57.150434275s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-756382 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0908 13:15:43.057920  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-756382 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.17692784s)
helpers_test.go:175: Cleaning up "running-upgrade-756382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-756382
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-756382: (2.255616742s)
--- PASS: TestRunningBinaryUpgrade (88.74s)

                                                
                                    
TestKubernetesUpgrade (376.75s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-444653 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-444653 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (41.224325597s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-444653
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-444653: (2.15114284s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-444653 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-444653 status --format={{.Host}}: exit status 7 (118.934333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-444653 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-444653 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m44.310520501s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-444653 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-444653 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-444653 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (206.509314ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-444653] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-444653
	    minikube start -p kubernetes-upgrade-444653 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4446532 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-444653 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-444653 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-444653 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.829113956s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-444653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-444653
E0908 13:23:38.998304  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-444653: (2.677263168s)
--- PASS: TestKubernetesUpgrade (376.75s)
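
The exit-106 failure above is minikube refusing to downgrade an existing v1.34.0 cluster to v1.28.0. A toy version comparison that would detect such a downgrade, assuming simple "vMAJOR.MINOR.PATCH" strings — illustrative only, not minikube's version handling.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// parse splits "v1.34.0" into numeric fields; parse errors are ignored for
	// brevity in this sketch, so inputs are assumed well-formed.
	func parse(v string) [3]int {
		var out [3]int
		parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
		for i, p := range parts {
			out[i], _ = strconv.Atoi(p)
		}
		return out
	}

	// isDowngrade reports whether target is older than current.
	func isDowngrade(current, target string) bool {
		c, t := parse(current), parse(target)
		for i := 0; i < 3; i++ {
			if t[i] != c[i] {
				return t[i] < c[i]
			}
		}
		return false
	}

	func main() {
		fmt.Println(isDowngrade("v1.34.0", "v1.28.0")) // true: refused, as the test expects
		fmt.Println(isDowngrade("v1.28.0", "v1.34.0")) // false: upgrading is allowed
	}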

                                                
                                    
TestMissingContainerUpgrade (90.31s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1528153345 start -p missing-upgrade-355554 --memory=3072 --driver=docker  --container-runtime=docker
E0908 13:16:23.109687  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:16:23.116183  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:16:23.127575  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:16:23.149034  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:16:23.190528  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:16:23.271952  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:16:23.433584  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:16:23.755324  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:16:24.396767  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:16:25.679067  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1528153345 start -p missing-upgrade-355554 --memory=3072 --driver=docker  --container-runtime=docker: (33.143357372s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-355554
E0908 13:16:28.240402  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:16:33.362790  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-355554: (10.483696928s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-355554
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-355554 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0908 13:16:43.605037  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-355554 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (43.303745446s)
helpers_test.go:175: Cleaning up "missing-upgrade-355554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-355554
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-355554: (2.137862037s)
--- PASS: TestMissingContainerUpgrade (90.31s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E0908 13:17:45.060869  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStoppedBinaryUpgrade/Setup (1.02s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (76.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.547754483 start -p stopped-upgrade-011183 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.547754483 start -p stopped-upgrade-011183 --memory=3072 --vm-driver=docker  --container-runtime=docker: (43.541631781s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.547754483 -p stopped-upgrade-011183 stop
E0908 13:18:38.997718  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.547754483 -p stopped-upgrade-011183 stop: (10.877009963s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-011183 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0908 13:18:46.134629  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-011183 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (22.044840408s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (76.46s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-011183
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-011183: (1.149697463s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                    
TestPause/serial/Start (78.2s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-820649 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0908 13:19:06.983991  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-820649 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m18.199278587s)
--- PASS: TestPause/serial/Start (78.20s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (55.09s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-820649 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0908 13:20:43.060298  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-820649 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (55.070134268s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (55.09s)

                                                
                                    
TestPause/serial/Pause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-820649 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

                                                
                                    
TestPause/serial/VerifyStatus (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-820649 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-820649 --output=json --layout=cluster: exit status 2 (396.943362ms)

                                                
                                                
-- stdout --
	{"Name":"pause-820649","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-820649","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)

                                                
                                    
TestPause/serial/Unpause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-820649 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

                                                
                                    
TestPause/serial/PauseAgain (0.7s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-820649 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.70s)

                                                
                                    
TestPause/serial/DeletePaused (2.32s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-820649 --alsologtostderr -v=5
E0908 13:21:23.107529  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-820649 --alsologtostderr -v=5: (2.323120532s)
--- PASS: TestPause/serial/DeletePaused (2.32s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-820649
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-820649: exit status 1 (17.297112ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-820649: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.43s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063428 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-063428 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (98.296155ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-063428] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-272936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-272936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
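
The MK_USAGE failure above is a mutually-exclusive-flag check: `--no-kubernetes` cannot be combined with `--kubernetes-version`. A minimal flag-validation sketch in the same spirit; the flag names are reused for illustration, but this is not minikube's option handling.

	package main

	import (
		"errors"
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		// Reject the conflicting combination up front, like the exit-14 MK_USAGE error above.
		if *noKubernetes && *kubernetesVersion != "" {
			fmt.Fprintln(os.Stderr,
				errors.New("cannot specify --kubernetes-version with --no-kubernetes"))
			os.Exit(14)
		}
		fmt.Println("flags OK")
	}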

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (38.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063428 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0908 13:21:50.825368  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-063428 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (38.319977106s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-063428 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.83s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063428 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-063428 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (16.475113641s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-063428 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-063428 status -o json: exit status 2 (347.779267ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-063428","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-063428
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-063428: (1.73040808s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.55s)

                                                
                                    
TestNoKubernetes/serial/Start (7.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063428 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-063428 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (7.19195411s)
--- PASS: TestNoKubernetes/serial/Start (7.19s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-063428 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-063428 "sudo systemctl is-active --quiet service kubelet": exit status 1 (295.253287ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)

TestNoKubernetes/serial/ProfileList (1.2s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.20s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-063428
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-063428: (1.209530816s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (10.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-063428 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-063428 --driver=docker  --container-runtime=docker: (10.329657574s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (10.33s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-063428 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-063428 "sudo systemctl is-active --quiet service kubelet": exit status 1 (464.154319ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

TestNetworkPlugins/group/auto/Start (74.52s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m14.523094078s)
--- PASS: TestNetworkPlugins/group/auto/Start (74.52s)

TestNetworkPlugins/group/kindnet/Start (73.45s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m13.445909374s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.45s)

TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-121975 "pgrep -a kubelet"
I0908 13:23:58.678064  274796 config.go:182] Loaded profile config "auto-121975": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

TestNetworkPlugins/group/auto/NetCatPod (10.41s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-121975 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l5mzn" [085465a2-4e5c-41eb-82f0-a6c250c6a290] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-l5mzn" [085465a2-4e5c-41eb-82f0-a6c250c6a290] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003647853s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.41s)

TestNetworkPlugins/group/auto/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-121975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.30s)

TestNetworkPlugins/group/auto/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.25s)

TestNetworkPlugins/group/auto/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)

TestNetworkPlugins/group/calico/Start (84.67s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m24.665787708s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.67s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-rt99j" [e6456185-1132-432d-a5cc-171d0b86a079] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003518449s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-121975 "pgrep -a kubelet"
I0908 13:24:59.125447  274796 config.go:182] Loaded profile config "kindnet-121975": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-121975 replace --force -f testdata/netcat-deployment.yaml
I0908 13:24:59.460017  274796 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-spr28" [684d9948-ec2b-478f-8c78-562db9559e3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-spr28" [684d9948-ec2b-478f-8c78-562db9559e3f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004288709s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-121975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/custom-flannel/Start (64.65s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0908 13:25:43.058534  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m4.650225584s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.65s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-bs244" [7543cdaa-5c5c-461e-ab56-fddbc27a903c] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.009895732s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-121975 "pgrep -a kubelet"
I0908 13:26:06.177574  274796 config.go:182] Loaded profile config "calico-121975": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-121975 replace --force -f testdata/netcat-deployment.yaml
I0908 13:26:06.515157  274796 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pr442" [5f6b6f4d-20a8-4f14-8b3e-7cb56ca4ef06] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pr442" [5f6b6f4d-20a8-4f14-8b3e-7cb56ca4ef06] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003603114s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.36s)

TestNetworkPlugins/group/calico/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-121975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

TestNetworkPlugins/group/calico/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.35s)

TestNetworkPlugins/group/calico/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.26s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-121975 "pgrep -a kubelet"
I0908 13:26:42.918958  274796 config.go:182] Loaded profile config "custom-flannel-121975": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-121975 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fqhc5" [72bde368-80df-479f-bfa4-a62205eed6af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fqhc5" [72bde368-80df-479f-bfa4-a62205eed6af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004398631s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.52s)

TestNetworkPlugins/group/false/Start (78.27s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m18.274349818s)
--- PASS: TestNetworkPlugins/group/false/Start (78.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-121975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (77.5s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m17.495615642s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (77.50s)

TestNetworkPlugins/group/false/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-121975 "pgrep -a kubelet"
I0908 13:28:05.112599  274796 config.go:182] Loaded profile config "false-121975": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.30s)

TestNetworkPlugins/group/false/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-121975 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h5hnc" [dee50e4c-1f18-423b-89c6-0fa997f18296] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h5hnc" [dee50e4c-1f18-423b-89c6-0fa997f18296] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003997484s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.32s)

TestNetworkPlugins/group/false/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-121975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.21s)

TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

TestNetworkPlugins/group/flannel/Start (79.93s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0908 13:28:38.998551  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m19.931030756s)
--- PASS: TestNetworkPlugins/group/flannel/Start (79.93s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-121975 "pgrep -a kubelet"
I0908 13:28:39.639581  274796 config.go:182] Loaded profile config "enable-default-cni-121975": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-121975 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bptzz" [846c8ee4-aa58-4c44-a0d4-6b9ba531b44c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bptzz" [846c8ee4-aa58-4c44-a0d4-6b9ba531b44c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004661286s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-121975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.25s)

TestNetworkPlugins/group/bridge/Start (82.36s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0908 13:29:19.518431  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/auto-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:29:40.007899  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/auto-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:29:52.784473  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:29:52.790741  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:29:52.802094  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:29:52.823459  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:29:52.864815  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:29:52.946131  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:29:53.107752  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:29:53.429401  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:29:54.070650  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:29:55.351943  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m22.362385862s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.36s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-7l89c" [b87ff312-b54b-4225-a258-97c6b1c1faf2] Running
E0908 13:29:57.913311  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.031625873s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-121975 "pgrep -a kubelet"
I0908 13:30:01.797141  274796 config.go:182] Loaded profile config "flannel-121975": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.58s)

TestNetworkPlugins/group/flannel/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-121975 replace --force -f testdata/netcat-deployment.yaml
E0908 13:30:02.084871  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fbhxx" [1a5b4b47-9e72-40f1-9d8b-d89773225e28] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 13:30:03.035140  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-fbhxx" [1a5b4b47-9e72-40f1-9d8b-d89773225e28] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003688867s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.37s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-121975 exec deployment/netcat -- nslookup kubernetes.default
E0908 13:30:13.276816  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/kubenet/Start (86.09s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-121975 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m26.093963282s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (86.09s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-121975 "pgrep -a kubelet"
I0908 13:30:40.752703  274796 config.go:182] Loaded profile config "bridge-121975": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

TestNetworkPlugins/group/bridge/NetCatPod (9.49s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-121975 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zsd5w" [eef2032e-e03a-4a9b-afd4-43e6fcb8a668] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 13:30:43.057997  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-zsd5w" [eef2032e-e03a-4a9b-afd4-43e6fcb8a668] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005252569s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.49s)

TestNetworkPlugins/group/bridge/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-121975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.30s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.30s)

TestStartStop/group/old-k8s-version/serial/FirstStart (89.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-505180 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E0908 13:31:20.322181  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/calico-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:23.107146  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:40.804484  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/calico-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:42.891688  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/auto-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:43.397713  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:43.404013  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:43.415327  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:43.436653  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:43.477976  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:43.559335  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:43.720771  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:44.042383  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:44.684346  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:45.966631  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:48.527985  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:53.650106  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-505180 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m29.428071442s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (89.43s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-121975 "pgrep -a kubelet"
I0908 13:32:01.819354  274796 config.go:182] Loaded profile config "kubenet-121975": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-121975 replace --force -f testdata/netcat-deployment.yaml
I0908 13:32:02.196546  274796 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rvcqf" [aa1cbbf0-7e4f-4537-8ebb-c7c273812365] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 13:32:03.891530  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-rvcqf" [aa1cbbf0-7e4f-4537-8ebb-c7c273812365] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.003010814s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.38s)

TestNetworkPlugins/group/kubenet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-121975 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

TestNetworkPlugins/group/kubenet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

TestNetworkPlugins/group/kubenet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-121975 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)
E0908 13:37:40.059896  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:43.122917  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:45.489239  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/old-k8s-version-505180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:45.495589  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/old-k8s-version-505180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:45.506939  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/old-k8s-version-505180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:45.528335  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/old-k8s-version-505180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:45.569700  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/old-k8s-version-505180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:45.651531  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/old-k8s-version-505180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:45.812967  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/old-k8s-version-505180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:46.134647  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/old-k8s-version-505180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:46.776168  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/old-k8s-version-505180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:48.057787  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/old-k8s-version-505180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:50.619653  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/old-k8s-version-505180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:55.741264  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/old-k8s-version-505180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:38:05.405256  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:38:05.982778  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/old-k8s-version-505180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (61.34s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-686082 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0908 13:32:36.641878  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-686082 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m1.344462303s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.34s)
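
For reference, a hedged reproduction of the FirstStart step using the flags logged above; the comment about --preload=false reflects minikube's documented behaviour of skipping the preloaded image tarball, which is what the no-preload group exercises:

# Hedged reproduction of the no-preload FirstStart step (flags copied from the log).
# --preload=false should make minikube pull component images individually instead
# of using the preloaded tarball.
out/minikube-linux-arm64 start -p no-preload-686082 \
  --memory=3072 --alsologtostderr --wait=true --preload=false \
  --driver=docker --container-runtime=docker --kubernetes-version=v1.34.0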

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-505180 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b452f051-2c21-44ea-86f1-85fedd1d0de1] Pending
E0908 13:32:46.186963  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [b452f051-2c21-44ea-86f1-85fedd1d0de1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b452f051-2c21-44ea-86f1-85fedd1d0de1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003545186s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-505180 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.56s)
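
The DeployApp step above creates the busybox pod and then polls for the integration-test=busybox label with the suite's own helper. A rough kubectl-only approximation of the same sequence (kubectl wait stands in for the helper's polling loop):

# Hedged approximation of the DeployApp step using plain kubectl.
kubectl --context old-k8s-version-505180 create -f testdata/busybox.yaml
kubectl --context old-k8s-version-505180 wait pod \
  -l integration-test=busybox --for=condition=Ready --timeout=8m
kubectl --context old-k8s-version-505180 exec busybox -- /bin/sh -c "ulimit -n"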

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-505180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-505180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.578294859s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-505180 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-505180 --alsologtostderr -v=3
E0908 13:33:05.334900  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:05.405447  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:05.411870  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:05.423329  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:05.444759  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:05.486089  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:05.567489  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:05.729014  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:06.051284  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:06.693078  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:07.975238  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-505180 --alsologtostderr -v=3: (11.025538563s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-505180 -n old-k8s-version-505180
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-505180 -n old-k8s-version-505180: exit status 7 (99.075687ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-505180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
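
The EnableAddonAfterStop step probes the stopped profile with a Go-template status query, tolerating the non-zero exit, before enabling the dashboard addon. A condensed sketch of that sequence (quoting of the template is only for shell safety):

# A stopped profile prints "Stopped" and exits non-zero (exit status 7 in this run),
# which the test accepts before enabling the addon.
out/minikube-linux-arm64 status --format='{{.Host}}' \
  -p old-k8s-version-505180 -n old-k8s-version-505180
echo "status exit code: $?"
out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-505180 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4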

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (54.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-505180 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E0908 13:33:10.537626  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:15.658962  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:25.900757  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-505180 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (54.27056734s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-505180 -n old-k8s-version-505180
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (54.70s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-686082 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [aaf2a501-f568-4ea2-9bb0-ca6d8669d5d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0908 13:33:38.998594  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/addons-812729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:39.954476  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:39.960921  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:39.972308  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:39.994502  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:40.035902  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:40.117855  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:40.279346  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:40.600850  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [aaf2a501-f568-4ea2-9bb0-ca6d8669d5d7] Running
E0908 13:33:41.243410  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:42.525517  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:43.687640  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/calico-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:45.087794  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:46.382384  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003950596s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-686082 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-686082 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-686082 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.376376067s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-686082 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-686082 --alsologtostderr -v=3
E0908 13:33:50.209928  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:33:59.018683  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/auto-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-686082 --alsologtostderr -v=3: (11.124897388s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-686082 -n no-preload-686082
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-686082 -n no-preload-686082: exit status 7 (86.369751ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-686082 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (28.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-686082 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0908 13:34:00.453139  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-686082 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (28.32979326s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-686082 -n no-preload-686082
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (28.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mnpc4" [2b1df923-71aa-42a7-b106-e5fd37f5be94] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004005634s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mnpc4" [2b1df923-71aa-42a7-b106-e5fd37f5be94] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003936518s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-505180 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-505180 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)
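
The image check above lists the profile's images as JSON. A small sketch that reruns the same listing and pretty-prints it (jq is assumed to be available and is used only for readability; the test parses the JSON itself):

# Re-run the image listing used by VerifyKubernetesImages and pretty-print it.
out/minikube-linux-arm64 -p old-k8s-version-505180 image list --format=json | jq .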

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-505180 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-505180 --alsologtostderr -v=1: (1.050889249s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-505180 -n old-k8s-version-505180
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-505180 -n old-k8s-version-505180: exit status 2 (408.996028ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-505180 -n old-k8s-version-505180
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-505180 -n old-k8s-version-505180: exit status 2 (465.969576ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-505180 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-505180 -n old-k8s-version-505180
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-505180 -n old-k8s-version-505180
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.84s)
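
The Pause step pauses the profile, reads the APIServer and Kubelet status fields (which report Paused and Stopped respectively, each with exit status 2), unpauses, and reads them again. A condensed sketch of that round-trip ('|| true' keeps the sketch going past the expected non-zero exits):

# Pause, inspect, unpause, inspect again.
out/minikube-linux-arm64 pause -p old-k8s-version-505180 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-505180 -n old-k8s-version-505180 || true
out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p old-k8s-version-505180 -n old-k8s-version-505180 || true
out/minikube-linux-arm64 unpause -p old-k8s-version-505180 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-505180 -n old-k8s-version-505180
out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p old-k8s-version-505180 -n old-k8s-version-505180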

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (83.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-567168 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0908 13:34:26.733564  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/auto-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:34:27.256211  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:34:27.344672  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-567168 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m23.02354346s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2jkwm" [1a20e5eb-c6ff-4313-bca7-8b806832582c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2jkwm" [1a20e5eb-c6ff-4313-bca7-8b806832582c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.003647823s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2jkwm" [1a20e5eb-c6ff-4313-bca7-8b806832582c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003350242s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-686082 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-686082 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-686082 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-686082 -n no-preload-686082
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-686082 -n no-preload-686082: exit status 2 (523.471186ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-686082 -n no-preload-686082
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-686082 -n no-preload-686082: exit status 2 (382.348768ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-686082 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-686082 -n no-preload-686082
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-686082 -n no-preload-686082
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-560910 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0908 13:34:52.785135  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:34:56.189917  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:34:56.202299  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:34:56.218135  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:34:56.240125  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:34:56.281443  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:34:56.362782  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:34:56.524243  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:34:56.845836  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:34:57.487968  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:34:58.769445  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:01.331494  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:01.897086  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:06.453479  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:16.695586  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:20.483173  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kindnet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:26.136128  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:37.177026  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:41.210475  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:41.216838  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:41.228236  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:41.249708  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:41.291187  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:41.372798  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:41.534382  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:41.856134  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:42.498273  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:43.058464  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/functional-140475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:43.779743  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:35:46.341616  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-560910 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (56.070545763s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-567168 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1c3013f2-8288-4ca6-9a70-223cad2cf3fa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1c3013f2-8288-4ca6-9a70-223cad2cf3fa] Running
E0908 13:35:51.463925  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003667305s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-567168 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-560910 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [38dc71a1-a1af-44b0-8371-2b08a1d6208b] Pending
helpers_test.go:352: "busybox" [38dc71a1-a1af-44b0-8371-2b08a1d6208b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0908 13:35:49.266885  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [38dc71a1-a1af-44b0-8371-2b08a1d6208b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.007530729s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-560910 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-567168 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-567168 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.002793757s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-567168 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-567168 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-567168 --alsologtostderr -v=3: (11.202853263s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-560910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-560910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.397463354s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-560910 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-560910 --alsologtostderr -v=3
E0908 13:35:59.822889  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/calico-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:36:01.706021  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-560910 --alsologtostderr -v=3: (11.145576026s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-567168 -n embed-certs-567168
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-567168 -n embed-certs-567168: exit status 7 (78.382724ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-567168 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (61.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-567168 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-567168 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m1.542472669s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-567168 -n embed-certs-567168
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (61.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-560910 -n default-k8s-diff-port-560910
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-560910 -n default-k8s-diff-port-560910: exit status 7 (91.541374ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-560910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (61.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-560910 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0908 13:36:18.138551  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:36:22.187782  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:36:23.107541  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/skaffold-520822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:36:23.818622  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/enable-default-cni-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:36:27.529814  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/calico-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:36:43.397954  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:02.145538  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:02.151903  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:02.163353  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:02.184848  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:02.226281  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:02.307728  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:02.469276  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:02.790958  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:03.149911  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:03.432577  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:04.713986  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:07.275933  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-560910 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m1.623637725s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-560910 -n default-k8s-diff-port-560910
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (61.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-24rtk" [010e9a7b-1b97-495b-945d-43510dd2d6c7] Running
E0908 13:37:11.097873  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/custom-flannel-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:12.398104  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003513698s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fw6z4" [f902f14b-e314-4e1d-81ba-a6c304aa1925] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003397091s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-24rtk" [010e9a7b-1b97-495b-945d-43510dd2d6c7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004296831s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-567168 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fw6z4" [f902f14b-e314-4e1d-81ba-a6c304aa1925] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013118203s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-560910 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-567168 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-567168 --alsologtostderr -v=1
E0908 13:37:22.640643  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-567168 -n embed-certs-567168
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-567168 -n embed-certs-567168: exit status 2 (344.20748ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-567168 -n embed-certs-567168
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-567168 -n embed-certs-567168: exit status 2 (331.317034ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-567168 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-567168 -n embed-certs-567168
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-567168 -n embed-certs-567168
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.63s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-560910 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-560910 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-560910 --alsologtostderr -v=1: (1.231348395s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-560910 -n default-k8s-diff-port-560910
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-560910 -n default-k8s-diff-port-560910: exit status 2 (558.37723ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-560910 -n default-k8s-diff-port-560910
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-560910 -n default-k8s-diff-port-560910: exit status 2 (480.633063ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-560910 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-560910 --alsologtostderr -v=1: (1.007331942s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-560910 -n default-k8s-diff-port-560910
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-560910 -n default-k8s-diff-port-560910
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.81s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (38.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-188188 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-188188 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (38.376144766s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-188188 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-188188 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.094926248s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-188188 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-188188 --alsologtostderr -v=3: (8.958969739s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.96s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-188188 -n newest-cni-188188
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-188188 -n newest-cni-188188: exit status 7 (73.183683ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-188188 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (17.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-188188 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0908 13:38:24.084874  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/kubenet-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:38:25.072203  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/bridge-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:38:26.464806  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/old-k8s-version-505180/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:38:33.110436  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/false-121975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-188188 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (16.753761159s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-188188 -n newest-cni-188188
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-188188 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-188188 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-188188 -n newest-cni-188188
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-188188 -n newest-cni-188188: exit status 2 (314.588797ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-188188 -n newest-cni-188188
E0908 13:38:36.753825  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/no-preload-686082/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:38:36.760193  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/no-preload-686082/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:38:36.771522  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/no-preload-686082/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:38:36.792864  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/no-preload-686082/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:38:36.834220  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/no-preload-686082/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:38:36.915575  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/no-preload-686082/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-188188 -n newest-cni-188188: exit status 2 (303.288939ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-188188 --alsologtostderr -v=1
E0908 13:38:37.077342  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/no-preload-686082/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:38:37.399554  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/no-preload-686082/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-188188 -n newest-cni-188188
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-188188 -n newest-cni-188188
E0908 13:38:38.041222  274796 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-272936/.minikube/profiles/no-preload-686082/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.03s)

                                                
                                    

Test skip (26/347)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.72s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-561441 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-561441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-561441
--- SKIP: TestDownloadOnlyKic (0.72s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-121975 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-121975" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-121975

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-121975" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-121975"

                                                
                                                
----------------------- debugLogs end: cilium-121975 [took: 5.694990227s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-121975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-121975
--- SKIP: TestNetworkPlugins/group/cilium (5.91s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-441445" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-441445
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                    