Test Report: Docker_Linux_docker_arm64 21683

cf2611189ddf0f856b4ad9653dc441b770ddd00e:2025-10-02:41739

Failed tests (7/347)

TestFunctional/parallel/DashboardCmd (302.32s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-460513 --alsologtostderr -v=1]
E1002 20:10:32.271926  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:11:55.344691  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-460513 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-460513 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-460513 --alsologtostderr -v=1] stderr:
I1002 20:08:20.604311  926552 out.go:360] Setting OutFile to fd 1 ...
I1002 20:08:20.605725  926552 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:08:20.605754  926552 out.go:374] Setting ErrFile to fd 2...
I1002 20:08:20.605771  926552 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:08:20.606095  926552 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
I1002 20:08:20.606408  926552 mustload.go:65] Loading cluster: functional-460513
I1002 20:08:20.606822  926552 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:08:20.607402  926552 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
I1002 20:08:20.631506  926552 host.go:66] Checking if "functional-460513" exists ...
I1002 20:08:20.631929  926552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 20:08:20.691701  926552 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:08:20.678703839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1002 20:08:20.691965  926552 api_server.go:166] Checking apiserver status ...
I1002 20:08:20.692080  926552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 20:08:20.692124  926552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
I1002 20:08:20.709797  926552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
I1002 20:08:20.814392  926552 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/8243/cgroup
I1002 20:08:20.822668  926552 api_server.go:182] apiserver freezer: "7:freezer:/docker/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/kubepods/burstable/podb7d4b2f81362e26fd96513505b6d8dc0/db9b1101b76ebf9d644569f9577bc46d29730b1552ff6f03e52c9553fecf7545"
I1002 20:08:20.822757  926552 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/kubepods/burstable/podb7d4b2f81362e26fd96513505b6d8dc0/db9b1101b76ebf9d644569f9577bc46d29730b1552ff6f03e52c9553fecf7545/freezer.state
I1002 20:08:20.830513  926552 api_server.go:204] freezer state: "THAWED"
I1002 20:08:20.830544  926552 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1002 20:08:20.840434  926552 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1002 20:08:20.840484  926552 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1002 20:08:20.840674  926552 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:08:20.840689  926552 addons.go:69] Setting dashboard=true in profile "functional-460513"
I1002 20:08:20.840696  926552 addons.go:238] Setting addon dashboard=true in "functional-460513"
I1002 20:08:20.840723  926552 host.go:66] Checking if "functional-460513" exists ...
I1002 20:08:20.841129  926552 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
I1002 20:08:20.867336  926552 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1002 20:08:20.870338  926552 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1002 20:08:20.873149  926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1002 20:08:20.873180  926552 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1002 20:08:20.873341  926552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
I1002 20:08:20.896862  926552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
I1002 20:08:20.999687  926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1002 20:08:20.999717  926552 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1002 20:08:21.015057  926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1002 20:08:21.015091  926552 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1002 20:08:21.029916  926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1002 20:08:21.029942  926552 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1002 20:08:21.044319  926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1002 20:08:21.044345  926552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1002 20:08:21.058527  926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1002 20:08:21.058588  926552 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1002 20:08:21.072713  926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1002 20:08:21.072733  926552 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1002 20:08:21.087454  926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1002 20:08:21.087504  926552 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1002 20:08:21.101859  926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1002 20:08:21.101909  926552 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1002 20:08:21.115930  926552 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1002 20:08:21.115983  926552 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1002 20:08:21.128954  926552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1002 20:08:21.955250  926552 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-460513 addons enable metrics-server

I1002 20:08:21.958269  926552 addons.go:201] Writing out "functional-460513" config to set dashboard=true...
W1002 20:08:21.958578  926552 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1002 20:08:21.959275  926552 kapi.go:59] client config for functional-460513: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.key", CAFile:"/home/jenkins/minikube-integration/21683-881023/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1002 20:08:21.959838  926552 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1002 20:08:21.959861  926552 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1002 20:08:21.959867  926552 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1002 20:08:21.959876  926552 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1002 20:08:21.959885  926552 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1002 20:08:21.976983  926552 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  7c183d51-45cd-4124-8c18-547dd1781a7c 1569 0 2025-10-02 20:08:21 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-02 20:08:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.111.21.165,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.21.165],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1002 20:08:21.977158  926552 out.go:285] * Launching proxy ...
* Launching proxy ...
I1002 20:08:21.977291  926552 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-460513 proxy --port 36195]
I1002 20:08:21.977563  926552 dashboard.go:157] Waiting for kubectl to output host:port ...
I1002 20:08:22.051323  926552 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1002 20:08:22.051374  926552 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1002 20:08:22.071645  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[077ceb23-637a-4253-b029-23d08100a88a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40000ff180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043f7c0 TLS:<nil>}
I1002 20:08:22.071730  926552 retry.go:31] will retry after 128.346µs: Temporary Error: unexpected response code: 503
I1002 20:08:22.081026  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9bbb3dd0-3096-4c4c-9c7d-49663a6cfc76] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40007cdd40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c3c0 TLS:<nil>}
I1002 20:08:22.081111  926552 retry.go:31] will retry after 117.352µs: Temporary Error: unexpected response code: 503
I1002 20:08:22.085423  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[00eb5d8d-a315-480d-a792-ee0f2ce06bbb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40000ff280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043f900 TLS:<nil>}
I1002 20:08:22.085509  926552 retry.go:31] will retry after 289.683µs: Temporary Error: unexpected response code: 503
I1002 20:08:22.090365  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f0e0cf53-fd88-42f9-817d-e320027a1a47] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40000ff340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c500 TLS:<nil>}
I1002 20:08:22.090433  926552 retry.go:31] will retry after 183.876µs: Temporary Error: unexpected response code: 503
I1002 20:08:22.094622  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[87ba962e-7d7e-4442-aef1-3c6ac86b3fee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043fa40 TLS:<nil>}
I1002 20:08:22.094687  926552 retry.go:31] will retry after 458.6µs: Temporary Error: unexpected response code: 503
I1002 20:08:22.098845  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8a300fc-0921-4a6b-af63-b7e7667de2a5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c640 TLS:<nil>}
I1002 20:08:22.098916  926552 retry.go:31] will retry after 582.735µs: Temporary Error: unexpected response code: 503
I1002 20:08:22.103834  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[207ee12f-93d4-4076-b93b-08b41b593c03] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027c8c0 TLS:<nil>}
I1002 20:08:22.103900  926552 retry.go:31] will retry after 591.949µs: Temporary Error: unexpected response code: 503
I1002 20:08:22.109020  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ea69ed95-b1ab-4751-bd37-05a1e90e36e3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027ca00 TLS:<nil>}
I1002 20:08:22.109087  926552 retry.go:31] will retry after 1.533459ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.114295  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6fcbe727-3faa-4b64-b181-58b042c28b2e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40006824c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027cdc0 TLS:<nil>}
I1002 20:08:22.114358  926552 retry.go:31] will retry after 2.329666ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.120671  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d998dbe4-6377-4b2e-8c21-62477a2e17e3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40000ff5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043fb80 TLS:<nil>}
I1002 20:08:22.120732  926552 retry.go:31] will retry after 4.975359ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.129598  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[16aa3b6d-f637-4238-9492-568845b64a4a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027cf00 TLS:<nil>}
I1002 20:08:22.129660  926552 retry.go:31] will retry after 8.169307ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.144552  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bf507169-6724-4740-a7f1-6a4ec33e67e5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043fe00 TLS:<nil>}
I1002 20:08:22.144625  926552 retry.go:31] will retry after 4.980271ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.152855  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5d2cbe40-4ca5-4418-9b2b-18b9034ab24b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40000ff700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e000 TLS:<nil>}
I1002 20:08:22.152943  926552 retry.go:31] will retry after 11.859304ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.169476  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b5c455f0-0b58-48d9-906d-56c4be539dcf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40000ff780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e140 TLS:<nil>}
I1002 20:08:22.169572  926552 retry.go:31] will retry after 26.885539ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.200253  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[629878bd-74ae-4197-983d-74d4becf3920] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027d040 TLS:<nil>}
I1002 20:08:22.200318  926552 retry.go:31] will retry after 36.998487ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.240591  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b5d95e4f-aa6f-408e-b50c-c2a6dace7117] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e280 TLS:<nil>}
I1002 20:08:22.240656  926552 retry.go:31] will retry after 43.877252ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.288159  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5048d1be-8b99-41c7-a1e5-921448660e89] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027d180 TLS:<nil>}
I1002 20:08:22.288233  926552 retry.go:31] will retry after 90.049932ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.381864  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[722924c9-c336-4ae6-a94f-9b9333441342] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027d2c0 TLS:<nil>}
I1002 20:08:22.381928  926552 retry.go:31] will retry after 61.001595ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.446377  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b68c7883-0c3e-43c9-83ff-d60264fc7c74] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400027de00 TLS:<nil>}
I1002 20:08:22.446447  926552 retry.go:31] will retry after 186.248647ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.637222  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[84a9c669-6be2-460f-8dc5-e877d3c2e1d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x40000ffa80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d0000 TLS:<nil>}
I1002 20:08:22.637285  926552 retry.go:31] will retry after 115.077398ms: Temporary Error: unexpected response code: 503
I1002 20:08:22.755874  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[166d3029-1cc1-46f5-8734-454cc4d08f32] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:22 GMT]] Body:0x4000682d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e3c0 TLS:<nil>}
I1002 20:08:22.755946  926552 retry.go:31] will retry after 398.772602ms: Temporary Error: unexpected response code: 503
I1002 20:08:23.158681  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[af00d3a1-018e-4989-8631-8fbe09c87663] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:23 GMT]] Body:0x4000682e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e500 TLS:<nil>}
I1002 20:08:23.158760  926552 retry.go:31] will retry after 487.496813ms: Temporary Error: unexpected response code: 503
I1002 20:08:23.649552  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f1ba6c34-f146-4f0d-9c40-8d63a568ef7c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:23 GMT]] Body:0x40000ffb40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e640 TLS:<nil>}
I1002 20:08:23.649665  926552 retry.go:31] will retry after 526.696114ms: Temporary Error: unexpected response code: 503
I1002 20:08:24.180626  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[268e3920-8d7d-412a-856d-f126be6f28ea] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:24 GMT]] Body:0x4000682f80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d0140 TLS:<nil>}
I1002 20:08:24.180702  926552 retry.go:31] will retry after 1.32686554s: Temporary Error: unexpected response code: 503
I1002 20:08:25.510752  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ddd8f53c-c71d-46ed-b180-fbfa5318f174] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:25 GMT]] Body:0x4000683080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e780 TLS:<nil>}
I1002 20:08:25.510817  926552 retry.go:31] will retry after 2.119624721s: Temporary Error: unexpected response code: 503
I1002 20:08:27.634116  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2fbc3ef2-accf-4da1-93a8-fd20c32e7ddb] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:27 GMT]] Body:0x40000ffc80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048e8c0 TLS:<nil>}
I1002 20:08:27.634201  926552 retry.go:31] will retry after 1.836507428s: Temporary Error: unexpected response code: 503
I1002 20:08:29.474137  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e0f3f044-ccfa-4a83-b27c-5d903d8f73c2] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:29 GMT]] Body:0x4000683200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d0280 TLS:<nil>}
I1002 20:08:29.474200  926552 retry.go:31] will retry after 3.586235995s: Temporary Error: unexpected response code: 503
I1002 20:08:33.064756  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0a0a21c4-0bbf-4310-a1ad-a3793fa3e223] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:33 GMT]] Body:0x4000683b40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048ea00 TLS:<nil>}
I1002 20:08:33.064824  926552 retry.go:31] will retry after 6.420180889s: Temporary Error: unexpected response code: 503
I1002 20:08:39.488621  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3b32c4ae-9186-40a1-afd4-695c1e7d66ae] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:39 GMT]] Body:0x4000683c00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048eb40 TLS:<nil>}
I1002 20:08:39.488740  926552 retry.go:31] will retry after 8.676547042s: Temporary Error: unexpected response code: 503
I1002 20:08:48.170714  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8338a081-a59f-4c50-bcfc-4a24ff9d1bbd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:48 GMT]] Body:0x4000683cc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048ec80 TLS:<nil>}
I1002 20:08:48.170778  926552 retry.go:31] will retry after 7.855718026s: Temporary Error: unexpected response code: 503
I1002 20:08:56.030096  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9bb3717f-bc64-413e-ae29-855abd3d3ca1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:08:56 GMT]] Body:0x4000683f40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048edc0 TLS:<nil>}
I1002 20:08:56.030205  926552 retry.go:31] will retry after 26.711375454s: Temporary Error: unexpected response code: 503
I1002 20:09:22.745175  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b0159567-4d3d-49f9-a5ec-056af44a7d6c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:09:22 GMT]] Body:0x40000ffec0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d03c0 TLS:<nil>}
I1002 20:09:22.745260  926552 retry.go:31] will retry after 24.540176246s: Temporary Error: unexpected response code: 503
I1002 20:09:47.289738  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8472e881-f27a-4483-b831-7d65eb6787e3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:09:47 GMT]] Body:0x4000824140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048ef00 TLS:<nil>}
I1002 20:09:47.289800  926552 retry.go:31] will retry after 37.029199269s: Temporary Error: unexpected response code: 503
I1002 20:10:24.323091  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f8ecd2ec-3759-4e9f-901e-e9346bd9db85] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:10:24 GMT]] Body:0x40000fe0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048f040 TLS:<nil>}
I1002 20:10:24.323160  926552 retry.go:31] will retry after 1m22.480425161s: Temporary Error: unexpected response code: 503
I1002 20:11:46.806737  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7d691c2e-4557-40ca-b82e-e2f192fe0e10] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:11:46 GMT]] Body:0x40000fea80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002d0500 TLS:<nil>}
I1002 20:11:46.806804  926552 retry.go:31] will retry after 1m13.835791241s: Temporary Error: unexpected response code: 503
I1002 20:13:00.646383  926552 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[73d4d864-2c33-4650-9ab6-d67ff34f1fea] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:13:00 GMT]] Body:0x4000824240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048f180 TLS:<nil>}
I1002 20:13:00.646457  926552 retry.go:31] will retry after 36.335242176s: Temporary Error: unexpected response code: 503
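
Editor's note: the repeated 503 responses above mean the kubernetes-dashboard Service behind the kubectl proxy never had a ready endpoint, so the dashboard command never printed a URL (functional_test.go:933). Below is a minimal Go sketch of the same health poll, handy for probing the endpoint by hand while the proxy from the log (port 36195) is still running. The URL and port are taken from the log above; the deadline and backoff schedule are illustrative and are not minikube's dashboard.go implementation.

// probe_dashboard.go: hedged sketch of the proxy health poll seen in the log.
// URL/port come from the log; retry timing here is illustrative only.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint the failing test polls via `kubectl proxy --port 36195`.
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"

	deadline := time.Now().Add(5 * time.Minute)
	backoff := 500 * time.Millisecond

	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("dashboard proxy healthy (200)")
				return
			}
			// A 503 here means the kubernetes-dashboard Service has no ready endpoints yet.
			fmt.Printf("unexpected response code: %d, retrying in %v\n", resp.StatusCode, backoff)
		} else {
			fmt.Printf("request failed: %v, retrying in %v\n", err, backoff)
		}
		time.Sleep(backoff)
		if backoff < 30*time.Second {
			backoff *= 2
		}
	}
	fmt.Println("dashboard proxy never became healthy before the deadline")
}
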
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-460513
helpers_test.go:243: (dbg) docker inspect functional-460513:

-- stdout --
	[
	    {
	        "Id": "b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e",
	        "Created": "2025-10-02T19:54:34.194287273Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 908898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T19:54:34.236525194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/hostname",
	        "HostsPath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/hosts",
	        "LogPath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e-json.log",
	        "Name": "/functional-460513",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-460513:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-460513",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e",
	                "LowerDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36-init/diff:/var/lib/docker/overlay2/4168a6b35c0191bd222903a9b469ebe18ea5b9d5b6daa344f4a494c07b59f9f7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-460513",
	                "Source": "/var/lib/docker/volumes/functional-460513/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-460513",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-460513",
	                "name.minikube.sigs.k8s.io": "functional-460513",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bee011508c270ebc2e408f73210ac3ca6232133e06ba77fc00469a23ae840d07",
	            "SandboxKey": "/var/run/docker/netns/bee011508c27",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33896"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33897"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33898"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33899"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-460513": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:74:65:19:66:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "46436a08b18539b6074e0247d0c1aef98e52bada9514c01c857330a2e439d034",
	                    "EndpointID": "dc8104321418323876b9e2a21a7a9e8d25ae8fe4b72705ceac33234352c25405",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-460513",
	                        "b8078c0512be"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
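
Editor's note: the inspect output above shows the published host ports (for example 22/tcp mapped to 127.0.0.1:33896), which the test resolves earlier in this log with a `docker container inspect --format` template. Below is a minimal sketch, assuming docker is on PATH, of reading the same mapping from Go instead of a template; the container name and port key are taken from the log, and the struct covers only the fields needed here.

// hostport.go: hedged sketch of extracting a published host port from
// `docker container inspect` JSON, mirroring the template lookup in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container models only the slice of inspect output used below.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "container", "inspect", "functional-460513").Output()
	if err != nil {
		panic(err)
	}
	var containers []container
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	if len(containers) == 0 {
		panic("no such container")
	}
	// 22/tcp is the SSH port minikube tunnels through; the log shows it bound to 33896.
	bindings := containers[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		panic("no host binding for 22/tcp")
	}
	fmt.Printf("ssh endpoint: %s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
}
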
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-460513 -n functional-460513
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-460513 logs -n 25: (1.226648138s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                            ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-460513 image save kicbase/echo-server:functional-460513 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ image          │ functional-460513 image rm kicbase/echo-server:functional-460513 --alsologtostderr                                                                          │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ image          │ functional-460513 image ls                                                                                                                                  │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ image          │ functional-460513 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ image          │ functional-460513 image ls                                                                                                                                  │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ image          │ functional-460513 image save --daemon kicbase/echo-server:functional-460513 --alsologtostderr                                                               │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ docker-env     │ functional-460513 docker-env                                                                                                                                │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ docker-env     │ functional-460513 docker-env                                                                                                                                │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ ssh            │ functional-460513 ssh sudo cat /etc/test/nested/copy/882884/hosts                                                                                           │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ ssh            │ functional-460513 ssh sudo cat /etc/ssl/certs/882884.pem                                                                                                    │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ ssh            │ functional-460513 ssh sudo cat /usr/share/ca-certificates/882884.pem                                                                                        │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ ssh            │ functional-460513 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                    │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ ssh            │ functional-460513 ssh sudo cat /etc/ssl/certs/8828842.pem                                                                                                   │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ ssh            │ functional-460513 ssh sudo cat /usr/share/ca-certificates/8828842.pem                                                                                       │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ ssh            │ functional-460513 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                    │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ image          │ functional-460513 image ls --format short --alsologtostderr                                                                                                 │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ image          │ functional-460513 image ls --format yaml --alsologtostderr                                                                                                  │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ ssh            │ functional-460513 ssh pgrep buildkitd                                                                                                                       │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │                     │
	│ image          │ functional-460513 image build -t localhost/my-image:functional-460513 testdata/build --alsologtostderr                                                      │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ image          │ functional-460513 image ls                                                                                                                                  │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ image          │ functional-460513 image ls --format json --alsologtostderr                                                                                                  │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ image          │ functional-460513 image ls --format table --alsologtostderr                                                                                                 │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ update-context │ functional-460513 update-context --alsologtostderr -v=2                                                                                                     │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ update-context │ functional-460513 update-context --alsologtostderr -v=2                                                                                                     │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	│ update-context │ functional-460513 update-context --alsologtostderr -v=2                                                                                                     │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │ 02 Oct 25 20:12 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
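The audit rows above record an image save / rm / load round-trip against the functional-460513 profile. For reference, below is a minimal Go sketch that replays the same round-trip by shelling out to the minikube binary. Assumptions not taken from the report: minikube is on PATH, the profile still exists, and /tmp/echo-server-save.tar is an illustrative destination path, not the one the test actually used.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to the minikube binary, mirroring how the audited
	// commands above were invoked, and echoes the combined output.
	func run(args ...string) error {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s\n", args, out)
		return err
	}

	func main() {
		profile := "functional-460513"          // profile name from the audit table
		img := "kicbase/echo-server:" + profile // image tag used in the audited commands
		tar := "/tmp/echo-server-save.tar"      // illustrative path (assumption)

		steps := [][]string{
			{"-p", profile, "image", "save", img, tar},
			{"-p", profile, "image", "rm", img},
			{"-p", profile, "image", "load", tar},
			{"-p", profile, "image", "ls"},
		}
		for _, step := range steps {
			if err := run(step...); err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
	}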
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:08:20
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:08:20.364656  926478 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:08:20.364845  926478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:08:20.364857  926478 out.go:374] Setting ErrFile to fd 2...
	I1002 20:08:20.364862  926478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:08:20.365129  926478 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
	I1002 20:08:20.365537  926478 out.go:368] Setting JSON to false
	I1002 20:08:20.366586  926478 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":17439,"bootTime":1759418262,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 20:08:20.366653  926478 start.go:140] virtualization:  
	I1002 20:08:20.369783  926478 out.go:179] * [functional-460513] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:08:20.373648  926478 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:08:20.373796  926478 notify.go:221] Checking for updates...
	I1002 20:08:20.379533  926478 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:08:20.382459  926478 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig
	I1002 20:08:20.390635  926478 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube
	I1002 20:08:20.394015  926478 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:08:20.396945  926478 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:08:20.400262  926478 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:08:20.400826  926478 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:08:20.430909  926478 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:08:20.431072  926478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:08:20.490993  926478 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:08:20.481776437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:08:20.491100  926478 docker.go:319] overlay module found
	I1002 20:08:20.494403  926478 out.go:179] * Using the docker driver based on existing profile
	I1002 20:08:20.497299  926478 start.go:306] selected driver: docker
	I1002 20:08:20.497320  926478 start.go:936] validating driver "docker" against &{Name:functional-460513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:08:20.497428  926478 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:08:20.497529  926478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:08:20.550819  926478 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:08:20.541610835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:08:20.551283  926478 cni.go:84] Creating CNI manager for ""
	I1002 20:08:20.551358  926478 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:08:20.551416  926478 start.go:350] cluster config:
	{Name:functional-460513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:08:20.554663  926478 out.go:179] * dry-run validation complete!
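The start log above repeatedly runs `docker system info --format "{{json .}}"` and prints a handful of the decoded fields (ServerVersion, OperatingSystem, NCPU, MemTotal). A standalone sketch of that probe is shown below; it assumes a reachable local Docker daemon and is only an illustration of the check, not minikube's cli_runner itself.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo picks out the few fields the start log above reports.
	type dockerInfo struct {
		ServerVersion   string `json:"ServerVersion"`
		OperatingSystem string `json:"OperatingSystem"`
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
	}

	func main() {
		// Same command the log shows being executed by cli_runner.
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s on %s, %d CPUs, %d bytes RAM\n",
			info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
	}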
	
	
	==> Docker <==
	Oct 02 20:08:22 functional-460513 cri-dockerd[7470]: time="2025-10-02T20:08:22Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Oct 02 20:08:22 functional-460513 dockerd[6691]: time="2025-10-02T20:08:22.732627175Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 20:08:22 functional-460513 dockerd[6691]: time="2025-10-02T20:08:22.819240810Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:08:38 functional-460513 dockerd[6691]: time="2025-10-02T20:08:38.155639221Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 20:08:38 functional-460513 dockerd[6691]: time="2025-10-02T20:08:38.244541496Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:08:38 functional-460513 dockerd[6691]: time="2025-10-02T20:08:38.294304625Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 20:08:38 functional-460513 dockerd[6691]: time="2025-10-02T20:08:38.387000951Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:08:47 functional-460513 dockerd[6691]: time="2025-10-02T20:08:47.320794251Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:08:53 functional-460513 dockerd[6691]: time="2025-10-02T20:08:53.315418324Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:09:05 functional-460513 dockerd[6691]: time="2025-10-02T20:09:05.149055644Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 20:09:05 functional-460513 dockerd[6691]: time="2025-10-02T20:09:05.245384626Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:09:08 functional-460513 dockerd[6691]: time="2025-10-02T20:09:08.152059959Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 20:09:08 functional-460513 dockerd[6691]: time="2025-10-02T20:09:08.237768594Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:09:51 functional-460513 dockerd[6691]: time="2025-10-02T20:09:51.149901296Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 20:09:51 functional-460513 dockerd[6691]: time="2025-10-02T20:09:51.344987903Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:09:51 functional-460513 cri-dockerd[7470]: time="2025-10-02T20:09:51Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Oct 02 20:09:58 functional-460513 dockerd[6691]: time="2025-10-02T20:09:58.148114294Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 20:09:58 functional-460513 dockerd[6691]: time="2025-10-02T20:09:58.242409584Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:11:22 functional-460513 dockerd[6691]: time="2025-10-02T20:11:22.149780472Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 20:11:22 functional-460513 dockerd[6691]: time="2025-10-02T20:11:22.236123492Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:11:25 functional-460513 dockerd[6691]: time="2025-10-02T20:11:25.139627309Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 20:11:25 functional-460513 dockerd[6691]: time="2025-10-02T20:11:25.226663867Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:12:21 functional-460513 dockerd[6691]: 2025/10/02 20:12:21 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Oct 02 20:12:56 functional-460513 dockerd[6691]: time="2025-10-02T20:12:56.421622660Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:12:56 functional-460513 cri-dockerd[7470]: time="2025-10-02T20:12:56Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
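Every dashboard and metrics-scraper pull in the dockerd log above is refused with `toomanyrequests`, Docker Hub's unauthenticated pull rate limit, so those images never reach the node. One possible workaround, sketched below, is to pull the images on the host (where any `docker login` credentials apply) and push them into the cluster with `minikube image load`. The image tags come from the cri-dockerd messages above; the rest is an assumption-laden illustration, not the remediation this CI job applies.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "functional-460513" // profile name from the logs above
		images := []string{
			"docker.io/kubernetesui/dashboard:v2.7.0",
			"docker.io/kubernetesui/metrics-scraper:v1.0.8",
		}
		for _, img := range images {
			// Pull on the host, where authenticated rate limits apply.
			if out, err := exec.Command("docker", "pull", img).CombinedOutput(); err != nil {
				fmt.Printf("docker pull %s failed: %v\n%s", img, err, out)
				return
			}
			// Load the host-side image into the cluster node, bypassing
			// the in-cluster unauthenticated pull.
			if out, err := exec.Command("minikube", "-p", profile, "image", "load", img).CombinedOutput(); err != nil {
				fmt.Printf("minikube image load %s failed: %v\n%s", img, err, out)
				return
			}
			fmt.Println("loaded", img)
		}
	}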
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	623723a43c077       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   d79e9c9417245       busybox-mount                               default
	bc256734b9fe8       nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                         15 minutes ago      Running             nginx                     0                   9789396a1419a       nginx-svc                                   default
	fe224927b52c5       05baa95f5142d                                                                                         15 minutes ago      Running             kube-proxy                2                   3a14beb9805ac       kube-proxy-z7ghw                            kube-system
	33f4fa437242e       ba04bb24b9575                                                                                         15 minutes ago      Running             storage-provisioner       2                   ed81eec2d77f8       storage-provisioner                         kube-system
	02510aec7c38d       138784d87c9c5                                                                                         15 minutes ago      Running             coredns                   2                   afeaaf344747b       coredns-66bc5c9577-bb2ds                    kube-system
	26134bc61f5d9       a1894772a478e                                                                                         15 minutes ago      Running             etcd                      2                   4cf8116593459       etcd-functional-460513                      kube-system
	db9b1101b76eb       43911e833d64d                                                                                         15 minutes ago      Running             kube-apiserver            0                   29b883084068a       kube-apiserver-functional-460513            kube-system
	0e09036d7add9       b5f57ec6b9867                                                                                         15 minutes ago      Running             kube-scheduler            2                   bfb48aab841d2       kube-scheduler-functional-460513            kube-system
	5d710be832df0       7eb2c6ff0c5a7                                                                                         15 minutes ago      Running             kube-controller-manager   2                   605575f71f812       kube-controller-manager-functional-460513   kube-system
	11843acc93b83       138784d87c9c5                                                                                         17 minutes ago      Exited              coredns                   1                   c216e22a818d3       coredns-66bc5c9577-bb2ds                    kube-system
	5e610b6f5c956       ba04bb24b9575                                                                                         17 minutes ago      Exited              storage-provisioner       1                   d6bece758620b       storage-provisioner                         kube-system
	8013cb97c756c       05baa95f5142d                                                                                         17 minutes ago      Exited              kube-proxy                1                   6fa6f9b610fc1       kube-proxy-z7ghw                            kube-system
	5459180499bcd       b5f57ec6b9867                                                                                         17 minutes ago      Exited              kube-scheduler            1                   14d19245da307       kube-scheduler-functional-460513            kube-system
	4805d040cabcf       a1894772a478e                                                                                         17 minutes ago      Exited              etcd                      1                   b61a48faa350c       etcd-functional-460513                      kube-system
	cccbefc54d3cd       7eb2c6ff0c5a7                                                                                         17 minutes ago      Exited              kube-controller-manager   1                   b7d3e4afda29d       kube-controller-manager-functional-460513   kube-system
	
	
	==> coredns [02510aec7c38] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34079 - 21240 "HINFO IN 244857414700627593.4635503374353347991. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021849786s
	
	
	==> coredns [11843acc93b8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43425 - 16476 "HINFO IN 6420058890467486523.4324477465014152588. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020251933s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-460513
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-460513
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=functional-460513
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T19_55_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 19:55:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-460513
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:13:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:12:30 +0000   Thu, 02 Oct 2025 19:54:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:12:30 +0000   Thu, 02 Oct 2025 19:54:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:12:30 +0000   Thu, 02 Oct 2025 19:54:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:12:30 +0000   Thu, 02 Oct 2025 19:55:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-460513
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 383087b4c8744483b09343609d84322f
	  System UUID:                5b6ef310-3cb5-4b1c-978f-45f181f323cd
	  Boot ID:                    0abe58db-3afd-40ad-9a63-2ed98334b343
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-s8zx4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-connect-7d85dfc575-85j8h           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-bb2ds                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     18m
	  kube-system                 etcd-functional-460513                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         18m
	  kube-system                 kube-apiserver-functional-460513              250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-functional-460513     200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-z7ghw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-functional-460513              100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-s9ptn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dlfsg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node functional-460513 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node functional-460513 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node functional-460513 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     18m                kubelet          Node functional-460513 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  18m                kubelet          Node functional-460513 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m                kubelet          Node functional-460513 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                18m                kubelet          Node functional-460513 status is now: NodeReady
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           18m                node-controller  Node functional-460513 event: Registered Node functional-460513 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node functional-460513 event: Registered Node functional-460513 in Controller
	  Warning  ContainerGCFailed        16m (x2 over 17m)  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node functional-460513 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node functional-460513 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node functional-460513 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                node-controller  Node functional-460513 event: Registered Node functional-460513 in Controller
	
	
	==> dmesg <==
	[Oct 2 18:16] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 19:46] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [26134bc61f5d] <==
	{"level":"warn","ts":"2025-10-02T19:57:30.255543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.266504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.329264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.337464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.364171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.390608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.429996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.451770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.474863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.502871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.549653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.571696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.604458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.632445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.711817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.749386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.773166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.809665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.895187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44136","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:07:29.316623Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1154}
	{"level":"info","ts":"2025-10-02T20:07:29.339888Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1154,"took":"22.906662ms","hash":2845891145,"current-db-size-bytes":3248128,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1507328,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-02T20:07:29.339943Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2845891145,"revision":1154,"compact-revision":-1}
	{"level":"info","ts":"2025-10-02T20:12:29.323117Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1456}
	{"level":"info","ts":"2025-10-02T20:12:29.326934Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1456,"took":"3.289627ms","hash":2636041803,"current-db-size-bytes":3248128,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":2297856,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-10-02T20:12:29.326982Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2636041803,"revision":1456,"compact-revision":1154}
	
	
	==> etcd [4805d040cabc] <==
	{"level":"warn","ts":"2025-10-02T19:56:24.649466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.669130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.692938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.722613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.742087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.758015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.852099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56828","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T19:57:09.748218Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T19:57:09.748293Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-460513","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T19:57:09.748479Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T19:57:16.751072Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T19:57:16.753295Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T19:57:16.753526Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T19:57:16.754984Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-02T19:57:16.755181Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T19:57:16.756496Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T19:57:16.756697Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T19:57:16.756785Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T19:57:16.756962Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T19:57:16.757051Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T19:57:16.757145Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T19:57:16.760135Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T19:57:16.760313Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T19:57:16.760384Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T19:57:16.760510Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-460513","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 20:13:21 up  4:55,  0 user,  load average: 0.27, 0.42, 1.02
	Linux functional-460513 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [db9b1101b76e] <==
	I1002 19:57:31.921322       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 19:57:31.921533       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 19:57:31.921674       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 19:57:31.921966       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 19:57:31.927546       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 19:57:31.928922       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 19:57:31.930128       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 19:57:32.120722       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 19:57:32.628068       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1002 19:57:33.145257       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 19:57:33.146828       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 19:57:33.160804       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 19:57:33.783270       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 19:57:33.834297       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 19:57:33.873574       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 19:57:33.886358       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 19:57:35.224249       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 19:57:47.082177       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.248.125"}
	I1002 19:57:54.116376       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.89.238"}
	I1002 19:58:02.716762       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.203.206"}
	I1002 20:02:03.053370       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.214.198"}
	I1002 20:07:31.811789       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 20:08:21.578235       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 20:08:21.915300       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.21.165"}
	I1002 20:08:21.942619       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.196.152"}
	
	
	==> kube-controller-manager [5d710be832df] <==
	I1002 19:57:35.186679       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 19:57:35.189823       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 19:57:35.193902       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 19:57:35.195049       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 19:57:35.199815       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 19:57:35.210359       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 19:57:35.210452       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 19:57:35.213251       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 19:57:35.213478       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 19:57:35.213609       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 19:57:35.216464       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 19:57:35.216526       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 19:57:35.216569       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 19:57:35.217285       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 19:57:35.219381       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 19:57:35.224560       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1002 20:08:21.702788       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:08:21.714297       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:08:21.726643       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:08:21.727415       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:08:21.743362       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:08:21.750012       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:08:21.757003       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:08:21.762532       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:08:21.766458       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [cccbefc54d3c] <==
	I1002 19:56:29.302392       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 19:56:29.307085       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 19:56:29.310379       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 19:56:29.319631       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 19:56:29.322784       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 19:56:29.326064       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 19:56:29.328302       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 19:56:29.332298       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 19:56:29.332513       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 19:56:29.332362       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 19:56:29.332660       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 19:56:29.332336       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 19:56:29.333765       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 19:56:29.333841       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 19:56:29.337080       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 19:56:29.339430       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 19:56:29.339806       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 19:56:29.339958       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 19:56:29.340109       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 19:56:29.340280       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 19:56:29.340402       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 19:56:29.342935       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 19:56:29.345450       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 19:56:29.348097       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 19:56:29.368572       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [8013cb97c756] <==
	I1002 19:56:26.293475       1 server_linux.go:53] "Using iptables proxy"
	I1002 19:56:26.542177       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 19:56:26.642813       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 19:56:26.642867       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 19:56:26.642965       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 19:56:26.881417       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 19:56:26.881477       1 server_linux.go:132] "Using iptables Proxier"
	I1002 19:56:26.953908       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 19:56:26.969663       1 server.go:527] "Version info" version="v1.34.1"
	I1002 19:56:26.969688       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 19:56:26.978549       1 config.go:200] "Starting service config controller"
	I1002 19:56:26.978578       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 19:56:27.018283       1 config.go:106] "Starting endpoint slice config controller"
	I1002 19:56:27.018304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 19:56:27.018327       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 19:56:27.018332       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 19:56:27.018825       1 config.go:309] "Starting node config controller"
	I1002 19:56:27.018838       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 19:56:27.018845       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 19:56:27.079689       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 19:56:27.118988       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 19:56:27.119021       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [fe224927b52c] <==
	I1002 19:57:33.648761       1 server_linux.go:53] "Using iptables proxy"
	I1002 19:57:33.759095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 19:57:33.860901       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 19:57:33.862375       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 19:57:33.862589       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 19:57:33.950063       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 19:57:33.953446       1 server_linux.go:132] "Using iptables Proxier"
	I1002 19:57:33.978599       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 19:57:33.978920       1 server.go:527] "Version info" version="v1.34.1"
	I1002 19:57:33.978939       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 19:57:33.981527       1 config.go:200] "Starting service config controller"
	I1002 19:57:33.981550       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 19:57:33.983742       1 config.go:106] "Starting endpoint slice config controller"
	I1002 19:57:33.983757       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 19:57:33.983780       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 19:57:33.983784       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 19:57:33.984516       1 config.go:309] "Starting node config controller"
	I1002 19:57:33.984523       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 19:57:33.984530       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 19:57:34.082480       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 19:57:34.085275       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 19:57:34.085312       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0e09036d7add] <==
	I1002 19:57:31.457845       1 serving.go:386] Generated self-signed cert in-memory
	I1002 19:57:33.347221       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 19:57:33.347258       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 19:57:33.354010       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 19:57:33.354105       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 19:57:33.354127       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 19:57:33.356128       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 19:57:33.366203       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:57:33.366227       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:57:33.366246       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:57:33.366252       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:57:33.454928       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 19:57:33.467906       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:57:33.467992       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [5459180499bc] <==
	I1002 19:56:24.387641       1 serving.go:386] Generated self-signed cert in-memory
	I1002 19:56:26.440161       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 19:56:26.440199       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 19:56:26.454187       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 19:56:26.454290       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 19:56:26.455734       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 19:56:26.461260       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 19:56:26.461736       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:56:26.461761       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:56:26.461780       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:56:26.461791       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:56:26.562274       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:56:26.562343       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 19:56:26.562445       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:57:09.734861       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 19:57:09.734884       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 19:57:09.734919       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 19:57:09.734950       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:57:09.734971       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1002 19:57:09.735006       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:57:09.735269       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 19:57:09.735298       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 20:12:26 functional-460513 kubelet[7846]: E1002 20:12:26.102902    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:12:28 functional-460513 kubelet[7846]: E1002 20:12:28.099796    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
	Oct 02 20:12:29 functional-460513 kubelet[7846]: E1002 20:12:29.101388    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-s9ptn" podUID="af01a2e9-cd27-4c95-a09b-56995a56ee5a"
	Oct 02 20:12:33 functional-460513 kubelet[7846]: E1002 20:12:33.099601    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:12:35 functional-460513 kubelet[7846]: E1002 20:12:35.101601    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dlfsg" podUID="e55b6bd9-93b5-47cf-bb1d-cc5a9e41aa9a"
	Oct 02 20:12:41 functional-460513 kubelet[7846]: E1002 20:12:41.100289    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:12:41 functional-460513 kubelet[7846]: E1002 20:12:41.100329    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
	Oct 02 20:12:43 functional-460513 kubelet[7846]: E1002 20:12:43.101553    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-s9ptn" podUID="af01a2e9-cd27-4c95-a09b-56995a56ee5a"
	Oct 02 20:12:47 functional-460513 kubelet[7846]: E1002 20:12:47.102192    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dlfsg" podUID="e55b6bd9-93b5-47cf-bb1d-cc5a9e41aa9a"
	Oct 02 20:12:48 functional-460513 kubelet[7846]: E1002 20:12:48.099802    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:12:56 functional-460513 kubelet[7846]: E1002 20:12:56.103437    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:12:56 functional-460513 kubelet[7846]: E1002 20:12:56.426033    7846 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 20:12:56 functional-460513 kubelet[7846]: E1002 20:12:56.426089    7846 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 20:12:56 functional-460513 kubelet[7846]: E1002 20:12:56.426173    7846 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-s8zx4_default(7076f721-7fec-48cb-b884-2ff8c9abbcd2): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:12:56 functional-460513 kubelet[7846]: E1002 20:12:56.426204    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
	Oct 02 20:12:57 functional-460513 kubelet[7846]: E1002 20:12:57.101973    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-s9ptn" podUID="af01a2e9-cd27-4c95-a09b-56995a56ee5a"
	Oct 02 20:12:59 functional-460513 kubelet[7846]: E1002 20:12:59.099435    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:13:00 functional-460513 kubelet[7846]: E1002 20:13:00.155823    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dlfsg" podUID="e55b6bd9-93b5-47cf-bb1d-cc5a9e41aa9a"
	Oct 02 20:13:09 functional-460513 kubelet[7846]: E1002 20:13:09.099514    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
	Oct 02 20:13:09 functional-460513 kubelet[7846]: E1002 20:13:09.101570    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-s9ptn" podUID="af01a2e9-cd27-4c95-a09b-56995a56ee5a"
	Oct 02 20:13:10 functional-460513 kubelet[7846]: E1002 20:13:10.100619    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:13:11 functional-460513 kubelet[7846]: E1002 20:13:11.099475    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:13:13 functional-460513 kubelet[7846]: E1002 20:13:13.101293    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dlfsg" podUID="e55b6bd9-93b5-47cf-bb1d-cc5a9e41aa9a"
	Oct 02 20:13:20 functional-460513 kubelet[7846]: E1002 20:13:20.099817    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
	Oct 02 20:13:22 functional-460513 kubelet[7846]: E1002 20:13:22.108726    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	
	
	==> storage-provisioner [33f4fa437242] <==
	W1002 20:12:57.411911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:12:59.415354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:12:59.422414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:01.426365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:01.433453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:03.436209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:03.441114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:05.443954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:05.451253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:07.454929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:07.459617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:09.462936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:09.467436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:11.470747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:11.475418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:13.478366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:13.485586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:15.488864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:15.493573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:17.496574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:17.501439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:19.509625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:19.515589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:21.518601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:13:21.524275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [5e610b6f5c95] <==
	W1002 19:56:45.555569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:45.562256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:47.582004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:47.589875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:49.592960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:49.598092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:51.600865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:51.608664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:53.612538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:53.618914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:55.622089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:55.627118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:57.630513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:57.635573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:59.638282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:59.643254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:01.646302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:01.653892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:03.657134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:03.662085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:05.665042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:05.669486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:07.672971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:07.679989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	E1002 19:57:09.680844       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-460513 -n functional-460513
helpers_test.go:269: (dbg) Run:  kubectl --context functional-460513 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-s8zx4 hello-node-connect-7d85dfc575-85j8h sp-pod dashboard-metrics-scraper-77bf4d6c4c-s9ptn kubernetes-dashboard-855c9754f9-dlfsg
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-460513 describe pod busybox-mount hello-node-75c85bcc94-s8zx4 hello-node-connect-7d85dfc575-85j8h sp-pod dashboard-metrics-scraper-77bf4d6c4c-s9ptn kubernetes-dashboard-855c9754f9-dlfsg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-460513 describe pod busybox-mount hello-node-75c85bcc94-s8zx4 hello-node-connect-7d85dfc575-85j8h sp-pod dashboard-metrics-scraper-77bf4d6c4c-s9ptn kubernetes-dashboard-855c9754f9-dlfsg: exit status 1 (117.870372ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-460513/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:08:09 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://623723a43c0770107afec46f91c2942c306af901014995772650f46fc90a1257
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 20:08:11 +0000
	      Finished:     Thu, 02 Oct 2025 20:08:11 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dtrfr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-dtrfr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m13s  default-scheduler  Successfully assigned default/busybox-mount to functional-460513
	  Normal  Pulling    5m13s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m11s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.134s (2.134s including waiting). Image size: 3547125 bytes.
	  Normal  Created    5m11s  kubelet            Created container: mount-munger
	  Normal  Started    5m11s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-s8zx4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-460513/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:02:02 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cg2d6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cg2d6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-s8zx4 to functional-460513
	  Warning  Failed     9m53s (x3 over 11m)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    8m27s (x5 over 11m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     8m27s (x5 over 11m)  kubelet            Error: ErrImagePull
	  Warning  Failed     8m27s (x2 over 11m)  kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    78s (x43 over 11m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     78s (x43 over 11m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-85j8h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-460513/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 19:58:02 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ps69t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ps69t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  15m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-85j8h to functional-460513
	  Normal   Pulling    12m (x5 over 15m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     12m (x5 over 15m)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     12m (x5 over 15m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    11s (x64 over 15m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     11s (x64 over 15m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-460513/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 19:57:59 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8qj7g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-8qj7g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  15m                 default-scheduler  Successfully assigned default/sp-pod to functional-460513
	  Warning  Failed     14m (x3 over 15m)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    12m (x5 over 15m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     12m (x5 over 15m)   kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x2 over 14m)   kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    12s (x65 over 15m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     12s (x65 over 15m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-s9ptn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-dlfsg" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-460513 describe pod busybox-mount hello-node-75c85bcc94-s8zx4 hello-node-connect-7d85dfc575-85j8h sp-pod dashboard-metrics-scraper-77bf4d6c4c-s9ptn kubernetes-dashboard-855c9754f9-dlfsg: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-460513 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-460513 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-85j8h" [7456d5e9-502e-455b-9ac7-aed4d302fe22] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1002 19:58:16.140411  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:00:32.271999  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:00:59.982321  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-460513 -n functional-460513
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-02 20:08:03.078487362 +0000 UTC m=+1257.493261959
functional_test.go:1645: (dbg) Run:  kubectl --context functional-460513 describe po hello-node-connect-7d85dfc575-85j8h -n default
functional_test.go:1645: (dbg) kubectl --context functional-460513 describe po hello-node-connect-7d85dfc575-85j8h -n default:
Name:             hello-node-connect-7d85dfc575-85j8h
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-460513/192.168.49.2
Start Time:       Thu, 02 Oct 2025 19:58:02 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ps69t (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-ps69t:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-85j8h to functional-460513
Normal   Pulling    6m57s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m57s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m57s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m47s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-460513 logs hello-node-connect-7d85dfc575-85j8h -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-460513 logs hello-node-connect-7d85dfc575-85j8h -n default: exit status 1 (102.586094ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-85j8h" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-460513 logs hello-node-connect-7d85dfc575-85j8h -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-460513 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-85j8h
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-460513/192.168.49.2
Start Time:       Thu, 02 Oct 2025 19:58:02 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ps69t (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-ps69t:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-85j8h to functional-460513
  Normal   Pulling    6m57s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m57s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m57s (x5 over 10m)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m47s (x21 over 10m)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-460513 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-460513 logs -l app=hello-node-connect: exit status 1 (98.50386ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-85j8h" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-460513 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-460513 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.98.203.206
IPs:                      10.98.203.206
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31726/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-460513
helpers_test.go:243: (dbg) docker inspect functional-460513:

-- stdout --
	[
	    {
	        "Id": "b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e",
	        "Created": "2025-10-02T19:54:34.194287273Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 908898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T19:54:34.236525194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/hostname",
	        "HostsPath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/hosts",
	        "LogPath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e-json.log",
	        "Name": "/functional-460513",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-460513:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-460513",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e",
	                "LowerDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36-init/diff:/var/lib/docker/overlay2/4168a6b35c0191bd222903a9b469ebe18ea5b9d5b6daa344f4a494c07b59f9f7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-460513",
	                "Source": "/var/lib/docker/volumes/functional-460513/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-460513",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-460513",
	                "name.minikube.sigs.k8s.io": "functional-460513",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bee011508c270ebc2e408f73210ac3ca6232133e06ba77fc00469a23ae840d07",
	            "SandboxKey": "/var/run/docker/netns/bee011508c27",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33896"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33897"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33898"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33899"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-460513": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:74:65:19:66:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "46436a08b18539b6074e0247d0c1aef98e52bada9514c01c857330a2e439d034",
	                    "EndpointID": "dc8104321418323876b9e2a21a7a9e8d25ae8fe4b72705ceac33234352c25405",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-460513",
	                        "b8078c0512be"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-460513 -n functional-460513
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-460513 logs -n 25: (1.207590741s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-460513 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:56 UTC │ 02 Oct 25 19:56 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 19:56 UTC │ 02 Oct 25 19:56 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 19:56 UTC │ 02 Oct 25 19:56 UTC │
	│ kubectl │ functional-460513 kubectl -- --context functional-460513 get pods                                                          │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:56 UTC │ 02 Oct 25 19:56 UTC │
	│ start   │ -p functional-460513 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:56 UTC │ 02 Oct 25 19:57 UTC │
	│ service │ invalid-svc -p functional-460513                                                                                           │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │                     │
	│ config  │ functional-460513 config unset cpus                                                                                        │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ cp      │ functional-460513 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ config  │ functional-460513 config get cpus                                                                                          │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │                     │
	│ config  │ functional-460513 config set cpus 2                                                                                        │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ config  │ functional-460513 config get cpus                                                                                          │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ config  │ functional-460513 config unset cpus                                                                                        │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ ssh     │ functional-460513 ssh -n functional-460513 sudo cat /home/docker/cp-test.txt                                               │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ config  │ functional-460513 config get cpus                                                                                          │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │                     │
	│ ssh     │ functional-460513 ssh echo hello                                                                                           │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ cp      │ functional-460513 cp functional-460513:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4024348198/001/cp-test.txt │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ ssh     │ functional-460513 ssh cat /etc/hostname                                                                                    │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ ssh     │ functional-460513 ssh -n functional-460513 sudo cat /home/docker/cp-test.txt                                               │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ tunnel  │ functional-460513 tunnel --alsologtostderr                                                                                 │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │                     │
	│ tunnel  │ functional-460513 tunnel --alsologtostderr                                                                                 │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │                     │
	│ cp      │ functional-460513 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ tunnel  │ functional-460513 tunnel --alsologtostderr                                                                                 │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │                     │
	│ ssh     │ functional-460513 ssh -n functional-460513 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ addons  │ functional-460513 addons list                                                                                              │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:58 UTC │ 02 Oct 25 19:58 UTC │
	│ addons  │ functional-460513 addons list -o json                                                                                      │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:58 UTC │ 02 Oct 25 19:58 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 19:56:44
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 19:56:44.872327  916052 out.go:360] Setting OutFile to fd 1 ...
	I1002 19:56:44.872466  916052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:56:44.872469  916052 out.go:374] Setting ErrFile to fd 2...
	I1002 19:56:44.872472  916052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:56:44.872744  916052 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
	I1002 19:56:44.873121  916052 out.go:368] Setting JSON to false
	I1002 19:56:44.874123  916052 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16743,"bootTime":1759418262,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 19:56:44.874184  916052 start.go:140] virtualization:  
	I1002 19:56:44.877511  916052 out.go:179] * [functional-460513] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 19:56:44.881444  916052 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 19:56:44.881596  916052 notify.go:221] Checking for updates...
	I1002 19:56:44.887366  916052 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:56:44.890262  916052 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig
	I1002 19:56:44.893120  916052 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube
	I1002 19:56:44.896075  916052 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 19:56:44.899128  916052 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 19:56:44.902531  916052 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 19:56:44.902635  916052 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 19:56:44.934419  916052 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 19:56:44.934528  916052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 19:56:45.019377  916052 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 19:56:44.98857067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 19:56:45.019523  916052 docker.go:319] overlay module found
	I1002 19:56:45.047367  916052 out.go:179] * Using the docker driver based on existing profile
	I1002 19:56:45.059739  916052 start.go:306] selected driver: docker
	I1002 19:56:45.059753  916052 start.go:936] validating driver "docker" against &{Name:functional-460513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:56:45.059874  916052 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 19:56:45.059994  916052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 19:56:45.178283  916052 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 19:56:45.163360572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 19:56:45.179029  916052 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 19:56:45.179068  916052 cni.go:84] Creating CNI manager for ""
	I1002 19:56:45.179137  916052 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 19:56:45.179193  916052 start.go:350] cluster config:
	{Name:functional-460513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:56:45.182494  916052 out.go:179] * Starting "functional-460513" primary control-plane node in "functional-460513" cluster
	I1002 19:56:45.185622  916052 cache.go:124] Beginning downloading kic base image for docker with docker
	I1002 19:56:45.191496  916052 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 19:56:45.197143  916052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 19:56:45.197242  916052 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-881023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1002 19:56:45.197253  916052 cache.go:59] Caching tarball of preloaded images
	I1002 19:56:45.197254  916052 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 19:56:45.197400  916052 preload.go:233] Found /home/jenkins/minikube-integration/21683-881023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 19:56:45.197410  916052 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1002 19:56:45.197535  916052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/config.json ...
	I1002 19:56:45.220607  916052 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 19:56:45.220620  916052 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 19:56:45.220632  916052 cache.go:233] Successfully downloaded all kic artifacts
	I1002 19:56:45.220662  916052 start.go:361] acquireMachinesLock for functional-460513: {Name:mk2abc62e3e0e90e6ec072747a3bff43c0103c14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:56:45.220728  916052 start.go:365] duration metric: took 44.439µs to acquireMachinesLock for "functional-460513"
	I1002 19:56:45.220749  916052 start.go:97] Skipping create...Using existing machine configuration
	I1002 19:56:45.220754  916052 fix.go:55] fixHost starting: 
	I1002 19:56:45.221039  916052 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
	I1002 19:56:45.253214  916052 fix.go:113] recreateIfNeeded on functional-460513: state=Running err=<nil>
	W1002 19:56:45.253260  916052 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 19:56:45.257346  916052 out.go:252] * Updating the running docker "functional-460513" container ...
	I1002 19:56:45.257388  916052 machine.go:93] provisionDockerMachine start ...
	I1002 19:56:45.257511  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:45.289314  916052 main.go:141] libmachine: Using SSH client type: native
	I1002 19:56:45.289671  916052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33896 <nil> <nil>}
	I1002 19:56:45.289679  916052 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 19:56:45.485835  916052 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-460513
	
	I1002 19:56:45.485863  916052 ubuntu.go:182] provisioning hostname "functional-460513"
	I1002 19:56:45.485934  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:45.506368  916052 main.go:141] libmachine: Using SSH client type: native
	I1002 19:56:45.506662  916052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33896 <nil> <nil>}
	I1002 19:56:45.506671  916052 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-460513 && echo "functional-460513" | sudo tee /etc/hostname
	I1002 19:56:45.661345  916052 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-460513
	
	I1002 19:56:45.661415  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:45.680967  916052 main.go:141] libmachine: Using SSH client type: native
	I1002 19:56:45.681377  916052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33896 <nil> <nil>}
	I1002 19:56:45.681393  916052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-460513' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-460513/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-460513' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 19:56:45.817758  916052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:56:45.817777  916052 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-881023/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-881023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-881023/.minikube}
	I1002 19:56:45.817798  916052 ubuntu.go:190] setting up certificates
	I1002 19:56:45.817811  916052 provision.go:84] configureAuth start
	I1002 19:56:45.817877  916052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-460513
	I1002 19:56:45.835746  916052 provision.go:143] copyHostCerts
	I1002 19:56:45.835824  916052 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-881023/.minikube/ca.pem, removing ...
	I1002 19:56:45.835843  916052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-881023/.minikube/ca.pem
	I1002 19:56:45.835926  916052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-881023/.minikube/ca.pem (1078 bytes)
	I1002 19:56:45.836025  916052 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-881023/.minikube/cert.pem, removing ...
	I1002 19:56:45.836029  916052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-881023/.minikube/cert.pem
	I1002 19:56:45.836053  916052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-881023/.minikube/cert.pem (1123 bytes)
	I1002 19:56:45.836124  916052 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-881023/.minikube/key.pem, removing ...
	I1002 19:56:45.836128  916052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-881023/.minikube/key.pem
	I1002 19:56:45.836152  916052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-881023/.minikube/key.pem (1675 bytes)
	I1002 19:56:45.836197  916052 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-881023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca-key.pem org=jenkins.functional-460513 san=[127.0.0.1 192.168.49.2 functional-460513 localhost minikube]
	I1002 19:56:46.761305  916052 provision.go:177] copyRemoteCerts
	I1002 19:56:46.761360  916052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 19:56:46.761410  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:46.778690  916052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
	I1002 19:56:46.873703  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 19:56:46.892842  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 19:56:46.911944  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 19:56:46.930484  916052 provision.go:87] duration metric: took 1.112649737s to configureAuth
	I1002 19:56:46.930502  916052 ubuntu.go:206] setting minikube options for container-runtime
	I1002 19:56:46.930705  916052 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 19:56:46.930754  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:46.947856  916052 main.go:141] libmachine: Using SSH client type: native
	I1002 19:56:46.948172  916052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33896 <nil> <nil>}
	I1002 19:56:46.948179  916052 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 19:56:47.083523  916052 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 19:56:47.083534  916052 ubuntu.go:71] root file system type: overlay
	I1002 19:56:47.083643  916052 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 19:56:47.083710  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:47.104561  916052 main.go:141] libmachine: Using SSH client type: native
	I1002 19:56:47.104875  916052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33896 <nil> <nil>}
	I1002 19:56:47.104954  916052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 19:56:47.263509  916052 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 19:56:47.263594  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:47.281253  916052 main.go:141] libmachine: Using SSH client type: native
	I1002 19:56:47.281577  916052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33896 <nil> <nil>}
	I1002 19:56:47.281593  916052 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 19:56:47.430740  916052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:56:47.430755  916052 machine.go:96] duration metric: took 2.173359689s to provisionDockerMachine
	I1002 19:56:47.430765  916052 start.go:294] postStartSetup for "functional-460513" (driver="docker")
	I1002 19:56:47.430774  916052 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 19:56:47.430837  916052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 19:56:47.430877  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:47.448810  916052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
	I1002 19:56:47.550492  916052 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 19:56:47.554365  916052 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 19:56:47.554384  916052 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 19:56:47.554395  916052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-881023/.minikube/addons for local assets ...
	I1002 19:56:47.554456  916052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-881023/.minikube/files for local assets ...
	I1002 19:56:47.554532  916052 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-881023/.minikube/files/etc/ssl/certs/8828842.pem -> 8828842.pem in /etc/ssl/certs
	I1002 19:56:47.554609  916052 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-881023/.minikube/files/etc/test/nested/copy/882884/hosts -> hosts in /etc/test/nested/copy/882884
	I1002 19:56:47.554658  916052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/882884
	I1002 19:56:47.563202  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/files/etc/ssl/certs/8828842.pem --> /etc/ssl/certs/8828842.pem (1708 bytes)
	I1002 19:56:47.591153  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/files/etc/test/nested/copy/882884/hosts --> /etc/test/nested/copy/882884/hosts (40 bytes)
	I1002 19:56:47.612344  916052 start.go:297] duration metric: took 181.562941ms for postStartSetup
	I1002 19:56:47.612432  916052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 19:56:47.612483  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:47.630560  916052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
	I1002 19:56:47.727148  916052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 19:56:47.732297  916052 fix.go:57] duration metric: took 2.51153522s for fixHost
	I1002 19:56:47.732313  916052 start.go:84] releasing machines lock for "functional-460513", held for 2.511577953s
	I1002 19:56:47.732397  916052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-460513
	I1002 19:56:47.749920  916052 ssh_runner.go:195] Run: cat /version.json
	I1002 19:56:47.749963  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:47.750206  916052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 19:56:47.750254  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:47.768735  916052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
	I1002 19:56:47.768881  916052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
	I1002 19:56:47.955881  916052 ssh_runner.go:195] Run: systemctl --version
	I1002 19:56:47.962571  916052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 19:56:47.967138  916052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 19:56:47.967213  916052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 19:56:47.975778  916052 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 19:56:47.975795  916052 start.go:496] detecting cgroup driver to use...
	I1002 19:56:47.975828  916052 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 19:56:47.975934  916052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:56:47.990871  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 19:56:48.002873  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 19:56:48.016096  916052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 19:56:48.016176  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 19:56:48.026805  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:56:48.036288  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 19:56:48.046904  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:56:48.056016  916052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 19:56:48.064497  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 19:56:48.074466  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 19:56:48.083892  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 19:56:48.094614  916052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 19:56:48.102766  916052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 19:56:48.110993  916052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:56:48.256064  916052 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 19:56:48.473892  916052 start.go:496] detecting cgroup driver to use...
	I1002 19:56:48.473929  916052 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 19:56:48.473977  916052 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 19:56:48.489074  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:56:48.505599  916052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 19:56:48.532938  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:56:48.548902  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 19:56:48.562324  916052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:56:48.577056  916052 ssh_runner.go:195] Run: which cri-dockerd
	I1002 19:56:48.580858  916052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 19:56:48.588702  916052 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1002 19:56:48.601726  916052 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 19:56:48.741632  916052 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 19:56:48.881414  916052 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 19:56:48.881510  916052 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 19:56:48.896863  916052 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1002 19:56:48.910259  916052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:56:49.051494  916052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 19:57:20.092321  916052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (31.040803163s)
	I1002 19:57:20.092388  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 19:57:20.113529  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1002 19:57:20.131321  916052 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1002 19:57:20.165315  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 19:57:20.179820  916052 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 19:57:20.303986  916052 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 19:57:20.420937  916052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:57:20.546067  916052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 19:57:20.562520  916052 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1002 19:57:20.576159  916052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:57:20.708327  916052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1002 19:57:20.791258  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 19:57:20.806148  916052 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 19:57:20.806204  916052 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 19:57:20.810212  916052 start.go:564] Will wait 60s for crictl version
	I1002 19:57:20.810265  916052 ssh_runner.go:195] Run: which crictl
	I1002 19:57:20.813794  916052 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 19:57:20.841754  916052 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I1002 19:57:20.841814  916052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 19:57:20.864797  916052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 19:57:20.890396  916052 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
	I1002 19:57:20.890484  916052 cli_runner.go:164] Run: docker network inspect functional-460513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 19:57:20.906438  916052 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 19:57:20.913290  916052 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 19:57:20.916144  916052 kubeadm.go:883] updating cluster {Name:functional-460513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 19:57:20.916275  916052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 19:57:20.916350  916052 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 19:57:20.934155  916052 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-460513
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1002 19:57:20.934166  916052 docker.go:621] Images already preloaded, skipping extraction
	I1002 19:57:20.934226  916052 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 19:57:20.953899  916052 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-460513
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1002 19:57:20.953928  916052 cache_images.go:85] Images are preloaded, skipping loading
	I1002 19:57:20.953936  916052 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 docker true true} ...
	I1002 19:57:20.954035  916052 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-460513 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 19:57:20.954104  916052 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 19:57:21.009990  916052 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 19:57:21.010015  916052 cni.go:84] Creating CNI manager for ""
	I1002 19:57:21.010037  916052 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 19:57:21.010045  916052 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 19:57:21.010068  916052 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-460513 NodeName:functional-460513 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 19:57:21.010199  916052 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-460513"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 19:57:21.010271  916052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 19:57:21.018636  916052 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 19:57:21.018697  916052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 19:57:21.026393  916052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1002 19:57:21.040543  916052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 19:57:21.053610  916052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
	I1002 19:57:21.066875  916052 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 19:57:21.070586  916052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:57:21.194457  916052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 19:57:21.207897  916052 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513 for IP: 192.168.49.2
	I1002 19:57:21.207908  916052 certs.go:195] generating shared ca certs ...
	I1002 19:57:21.207923  916052 certs.go:227] acquiring lock for ca certs: {Name:mk8d4e351e81262a2dea8d7403e1df60e121408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:57:21.208078  916052 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-881023/.minikube/ca.key
	I1002 19:57:21.208124  916052 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-881023/.minikube/proxy-client-ca.key
	I1002 19:57:21.208131  916052 certs.go:257] generating profile certs ...
	I1002 19:57:21.208212  916052 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.key
	I1002 19:57:21.208255  916052 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/apiserver.key.323dabdf
	I1002 19:57:21.208289  916052 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/proxy-client.key
	I1002 19:57:21.208401  916052 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/882884.pem (1338 bytes)
	W1002 19:57:21.208427  916052 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-881023/.minikube/certs/882884_empty.pem, impossibly tiny 0 bytes
	I1002 19:57:21.208434  916052 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 19:57:21.208456  916052 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca.pem (1078 bytes)
	I1002 19:57:21.208476  916052 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/cert.pem (1123 bytes)
	I1002 19:57:21.208499  916052 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/key.pem (1675 bytes)
	I1002 19:57:21.208539  916052 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-881023/.minikube/files/etc/ssl/certs/8828842.pem (1708 bytes)
	I1002 19:57:21.209132  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 19:57:21.229021  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 19:57:21.247707  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 19:57:21.266163  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 19:57:21.284542  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 19:57:21.302904  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 19:57:21.320596  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 19:57:21.339003  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 19:57:21.356332  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 19:57:21.373604  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/certs/882884.pem --> /usr/share/ca-certificates/882884.pem (1338 bytes)
	I1002 19:57:21.391343  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/files/etc/ssl/certs/8828842.pem --> /usr/share/ca-certificates/8828842.pem (1708 bytes)
	I1002 19:57:21.409748  916052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 19:57:21.423179  916052 ssh_runner.go:195] Run: openssl version
	I1002 19:57:21.429874  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 19:57:21.438833  916052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:57:21.443683  916052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:48 /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:57:21.443743  916052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:57:21.485942  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 19:57:21.494107  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/882884.pem && ln -fs /usr/share/ca-certificates/882884.pem /etc/ssl/certs/882884.pem"
	I1002 19:57:21.502433  916052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/882884.pem
	I1002 19:57:21.506367  916052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 19:54 /usr/share/ca-certificates/882884.pem
	I1002 19:57:21.506427  916052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/882884.pem
	I1002 19:57:21.547745  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/882884.pem /etc/ssl/certs/51391683.0"
	I1002 19:57:21.555870  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8828842.pem && ln -fs /usr/share/ca-certificates/8828842.pem /etc/ssl/certs/8828842.pem"
	I1002 19:57:21.564255  916052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8828842.pem
	I1002 19:57:21.567967  916052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 19:54 /usr/share/ca-certificates/8828842.pem
	I1002 19:57:21.568038  916052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8828842.pem
	I1002 19:57:21.609384  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8828842.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 19:57:21.617752  916052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 19:57:21.621662  916052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 19:57:21.663057  916052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 19:57:21.704251  916052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 19:57:21.745408  916052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 19:57:21.787171  916052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 19:57:21.828943  916052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 19:57:21.871143  916052 kubeadm.go:400] StartCluster: {Name:functional-460513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:57:21.871327  916052 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 19:57:21.889849  916052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 19:57:21.897839  916052 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 19:57:21.897861  916052 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 19:57:21.897925  916052 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 19:57:21.905623  916052 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 19:57:21.906151  916052 kubeconfig.go:125] found "functional-460513" server: "https://192.168.49.2:8441"
	I1002 19:57:21.907409  916052 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 19:57:21.915470  916052 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 19:54:45.719399576 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 19:57:21.064854999 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1002 19:57:21.916062  916052 kubeadm.go:1160] stopping kube-system containers ...
	I1002 19:57:21.916132  916052 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 19:57:21.951612  916052 docker.go:484] Stopping containers: [3b07bcd4bc1c 11843acc93b8 5e610b6f5c95 8013cb97c756 5459180499bc 4805d040cabc cccbefc54d3c 14d19245da30 d6bece758620 c216e22a818d 6fa6f9b610fc c60049a22e15 b7d3e4afda29 b61a48faa350 d194472b4bf8 6867bc4873bc 0b3cf02b86e8 ad2ef11072e0 b285b18b89de e1dc10703a0c b6ee35384076 20724a7ef1a3 8072c3662037 bf25eae9bbc2 f788e75d8dee 00d45b54f98d 818052c1c08f bf2907225e1a]
	I1002 19:57:21.951694  916052 ssh_runner.go:195] Run: docker stop 3b07bcd4bc1c 11843acc93b8 5e610b6f5c95 8013cb97c756 5459180499bc 4805d040cabc cccbefc54d3c 14d19245da30 d6bece758620 c216e22a818d 6fa6f9b610fc c60049a22e15 b7d3e4afda29 b61a48faa350 d194472b4bf8 6867bc4873bc 0b3cf02b86e8 ad2ef11072e0 b285b18b89de e1dc10703a0c b6ee35384076 20724a7ef1a3 8072c3662037 bf25eae9bbc2 f788e75d8dee 00d45b54f98d 818052c1c08f bf2907225e1a
	I1002 19:57:21.978611  916052 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 19:57:22.097786  916052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 19:57:22.106238  916052 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  2 19:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 19:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  2 19:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 19:54 /etc/kubernetes/scheduler.conf
	
	I1002 19:57:22.106300  916052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 19:57:22.114917  916052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 19:57:22.122813  916052 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 19:57:22.122871  916052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 19:57:22.130373  916052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 19:57:22.138390  916052 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 19:57:22.138450  916052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 19:57:22.146124  916052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 19:57:22.154249  916052 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 19:57:22.154303  916052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 19:57:22.162365  916052 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 19:57:22.170405  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:57:22.218615  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:57:25.659891  916052 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.441251725s)
	I1002 19:57:25.659949  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:57:25.886420  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:57:25.946021  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:57:26.046592  916052 api_server.go:52] waiting for apiserver process to appear ...
	I1002 19:57:26.046657  916052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:57:26.546823  916052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:57:27.047094  916052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:57:27.547732  916052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:57:27.589854  916052 api_server.go:72] duration metric: took 1.543274062s to wait for apiserver process to appear ...
	I1002 19:57:27.589868  916052 api_server.go:88] waiting for apiserver healthz status ...
	I1002 19:57:27.589885  916052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 19:57:31.753262  916052 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 19:57:31.753279  916052 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 19:57:31.753295  916052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 19:57:31.793943  916052 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 19:57:31.793957  916052 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 19:57:32.090337  916052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 19:57:32.108978  916052 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 19:57:32.108995  916052 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 19:57:32.590330  916052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 19:57:32.605209  916052 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 19:57:32.605225  916052 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 19:57:33.090888  916052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 19:57:33.107580  916052 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 19:57:33.107598  916052 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 19:57:33.590003  916052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 19:57:33.598674  916052 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 19:57:33.614383  916052 api_server.go:141] control plane version: v1.34.1
	I1002 19:57:33.614405  916052 api_server.go:131] duration metric: took 6.024531606s to wait for apiserver health ...
	I1002 19:57:33.614412  916052 cni.go:84] Creating CNI manager for ""
	I1002 19:57:33.614423  916052 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 19:57:33.617895  916052 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 19:57:33.621799  916052 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 19:57:33.632589  916052 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 19:57:33.649519  916052 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 19:57:33.656345  916052 system_pods.go:59] 7 kube-system pods found
	I1002 19:57:33.656372  916052 system_pods.go:61] "coredns-66bc5c9577-bb2ds" [bdb4fde2-1aa4-4469-9305-1f834deb7ff8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 19:57:33.656380  916052 system_pods.go:61] "etcd-functional-460513" [f75d720e-a504-4bfd-8064-f8329be4327c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 19:57:33.656389  916052 system_pods.go:61] "kube-apiserver-functional-460513" [fdb94b61-e4dd-4b3e-b049-a1295ba98edc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 19:57:33.656395  916052 system_pods.go:61] "kube-controller-manager-functional-460513" [72f69cd0-764b-4547-b465-9baa94361de4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 19:57:33.656400  916052 system_pods.go:61] "kube-proxy-z7ghw" [06232d19-d9a5-4182-8cc3-cee16711fdd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 19:57:33.656406  916052 system_pods.go:61] "kube-scheduler-functional-460513" [ea9bdc32-dab6-45d9-b411-f651b3fe6792] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 19:57:33.656418  916052 system_pods.go:61] "storage-provisioner" [d5b35c47-b886-46a7-8a67-e8931dcb50e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 19:57:33.656430  916052 system_pods.go:74] duration metric: took 6.895385ms to wait for pod list to return data ...
	I1002 19:57:33.656438  916052 node_conditions.go:102] verifying NodePressure condition ...
	I1002 19:57:33.660091  916052 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 19:57:33.660112  916052 node_conditions.go:123] node cpu capacity is 2
	I1002 19:57:33.660122  916052 node_conditions.go:105] duration metric: took 3.680273ms to run NodePressure ...
	I1002 19:57:33.660193  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:57:33.918886  916052 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 19:57:33.927198  916052 kubeadm.go:743] kubelet initialised
	I1002 19:57:33.927209  916052 kubeadm.go:744] duration metric: took 8.310069ms waiting for restarted kubelet to initialise ...
	I1002 19:57:33.927223  916052 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 19:57:33.943942  916052 ops.go:34] apiserver oom_adj: -16
	I1002 19:57:33.943953  916052 kubeadm.go:601] duration metric: took 12.046087277s to restartPrimaryControlPlane
	I1002 19:57:33.943961  916052 kubeadm.go:402] duration metric: took 12.072833528s to StartCluster
	I1002 19:57:33.944001  916052 settings.go:142] acquiring lock: {Name:mkac64c41b7df7147c80d0babd35e7ac38a28788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:57:33.944090  916052 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-881023/kubeconfig
	I1002 19:57:33.944801  916052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-881023/kubeconfig: {Name:mkf664a3dda5a2a163069a10b087ab5cefd54246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:57:33.945277  916052 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 19:57:33.945328  916052 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 19:57:33.945373  916052 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 19:57:33.945617  916052 addons.go:69] Setting storage-provisioner=true in profile "functional-460513"
	I1002 19:57:33.945642  916052 addons.go:238] Setting addon storage-provisioner=true in "functional-460513"
	W1002 19:57:33.945648  916052 addons.go:247] addon storage-provisioner should already be in state true
	I1002 19:57:33.945671  916052 host.go:66] Checking if "functional-460513" exists ...
	I1002 19:57:33.945677  916052 addons.go:69] Setting default-storageclass=true in profile "functional-460513"
	I1002 19:57:33.945690  916052 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-460513"
	I1002 19:57:33.945979  916052 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
	I1002 19:57:33.946101  916052 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
	I1002 19:57:33.951055  916052 out.go:179] * Verifying Kubernetes components...
	I1002 19:57:33.954208  916052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:57:34.001674  916052 addons.go:238] Setting addon default-storageclass=true in "functional-460513"
	W1002 19:57:34.001687  916052 addons.go:247] addon default-storageclass should already be in state true
	I1002 19:57:34.001713  916052 host.go:66] Checking if "functional-460513" exists ...
	I1002 19:57:34.002140  916052 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
	I1002 19:57:34.004156  916052 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 19:57:34.007313  916052 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 19:57:34.007326  916052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 19:57:34.007400  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:57:34.037362  916052 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 19:57:34.037375  916052 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 19:57:34.037433  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:57:34.037628  916052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
	I1002 19:57:34.066778  916052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
	I1002 19:57:34.242902  916052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 19:57:34.272147  916052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 19:57:34.275842  916052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 19:57:35.146371  916052 node_ready.go:35] waiting up to 6m0s for node "functional-460513" to be "Ready" ...
	I1002 19:57:35.158464  916052 node_ready.go:49] node "functional-460513" is "Ready"
	I1002 19:57:35.158482  916052 node_ready.go:38] duration metric: took 12.092736ms for node "functional-460513" to be "Ready" ...
	I1002 19:57:35.158498  916052 api_server.go:52] waiting for apiserver process to appear ...
	I1002 19:57:35.158573  916052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:57:35.176741  916052 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 19:57:35.179540  916052 addons.go:514] duration metric: took 1.23414396s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 19:57:35.183749  916052 api_server.go:72] duration metric: took 1.238397876s to wait for apiserver process to appear ...
	I1002 19:57:35.183764  916052 api_server.go:88] waiting for apiserver healthz status ...
	I1002 19:57:35.183792  916052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 19:57:35.193637  916052 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 19:57:35.194833  916052 api_server.go:141] control plane version: v1.34.1
	I1002 19:57:35.194847  916052 api_server.go:131] duration metric: took 11.078122ms to wait for apiserver health ...
	I1002 19:57:35.194855  916052 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 19:57:35.198698  916052 system_pods.go:59] 7 kube-system pods found
	I1002 19:57:35.198718  916052 system_pods.go:61] "coredns-66bc5c9577-bb2ds" [bdb4fde2-1aa4-4469-9305-1f834deb7ff8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 19:57:35.198725  916052 system_pods.go:61] "etcd-functional-460513" [f75d720e-a504-4bfd-8064-f8329be4327c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 19:57:35.198732  916052 system_pods.go:61] "kube-apiserver-functional-460513" [fdb94b61-e4dd-4b3e-b049-a1295ba98edc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 19:57:35.198738  916052 system_pods.go:61] "kube-controller-manager-functional-460513" [72f69cd0-764b-4547-b465-9baa94361de4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 19:57:35.198742  916052 system_pods.go:61] "kube-proxy-z7ghw" [06232d19-d9a5-4182-8cc3-cee16711fdd0] Running
	I1002 19:57:35.198747  916052 system_pods.go:61] "kube-scheduler-functional-460513" [ea9bdc32-dab6-45d9-b411-f651b3fe6792] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 19:57:35.198750  916052 system_pods.go:61] "storage-provisioner" [d5b35c47-b886-46a7-8a67-e8931dcb50e9] Running
	I1002 19:57:35.198755  916052 system_pods.go:74] duration metric: took 3.895955ms to wait for pod list to return data ...
	I1002 19:57:35.198762  916052 default_sa.go:34] waiting for default service account to be created ...
	I1002 19:57:35.202440  916052 default_sa.go:45] found service account: "default"
	I1002 19:57:35.202453  916052 default_sa.go:55] duration metric: took 3.686961ms for default service account to be created ...
	I1002 19:57:35.202461  916052 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 19:57:35.205491  916052 system_pods.go:86] 7 kube-system pods found
	I1002 19:57:35.205509  916052 system_pods.go:89] "coredns-66bc5c9577-bb2ds" [bdb4fde2-1aa4-4469-9305-1f834deb7ff8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 19:57:35.205516  916052 system_pods.go:89] "etcd-functional-460513" [f75d720e-a504-4bfd-8064-f8329be4327c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 19:57:35.205524  916052 system_pods.go:89] "kube-apiserver-functional-460513" [fdb94b61-e4dd-4b3e-b049-a1295ba98edc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 19:57:35.205529  916052 system_pods.go:89] "kube-controller-manager-functional-460513" [72f69cd0-764b-4547-b465-9baa94361de4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 19:57:35.205533  916052 system_pods.go:89] "kube-proxy-z7ghw" [06232d19-d9a5-4182-8cc3-cee16711fdd0] Running
	I1002 19:57:35.205538  916052 system_pods.go:89] "kube-scheduler-functional-460513" [ea9bdc32-dab6-45d9-b411-f651b3fe6792] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 19:57:35.205543  916052 system_pods.go:89] "storage-provisioner" [d5b35c47-b886-46a7-8a67-e8931dcb50e9] Running
	I1002 19:57:35.205548  916052 system_pods.go:126] duration metric: took 3.083558ms to wait for k8s-apps to be running ...
	I1002 19:57:35.205556  916052 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 19:57:35.205612  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 19:57:35.221525  916052 system_svc.go:56] duration metric: took 15.959021ms WaitForService to wait for kubelet
	I1002 19:57:35.221542  916052 kubeadm.go:586] duration metric: took 1.276196263s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 19:57:35.221559  916052 node_conditions.go:102] verifying NodePressure condition ...
	I1002 19:57:35.232548  916052 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 19:57:35.232564  916052 node_conditions.go:123] node cpu capacity is 2
	I1002 19:57:35.232575  916052 node_conditions.go:105] duration metric: took 11.01106ms to run NodePressure ...
	I1002 19:57:35.232586  916052 start.go:242] waiting for startup goroutines ...
	I1002 19:57:35.232592  916052 start.go:247] waiting for cluster config update ...
	I1002 19:57:35.232602  916052 start.go:256] writing updated cluster config ...
	I1002 19:57:35.232921  916052 ssh_runner.go:195] Run: rm -f paused
	I1002 19:57:35.236989  916052 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 19:57:35.298150  916052 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bb2ds" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:37.303478  916052 pod_ready.go:94] pod "coredns-66bc5c9577-bb2ds" is "Ready"
	I1002 19:57:37.303492  916052 pod_ready.go:86] duration metric: took 2.005326603s for pod "coredns-66bc5c9577-bb2ds" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:37.306313  916052 pod_ready.go:83] waiting for pod "etcd-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:38.812798  916052 pod_ready.go:94] pod "etcd-functional-460513" is "Ready"
	I1002 19:57:38.812812  916052 pod_ready.go:86] duration metric: took 1.50648697s for pod "etcd-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:38.815756  916052 pod_ready.go:83] waiting for pod "kube-apiserver-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:39.821905  916052 pod_ready.go:94] pod "kube-apiserver-functional-460513" is "Ready"
	I1002 19:57:39.821919  916052 pod_ready.go:86] duration metric: took 1.006150794s for pod "kube-apiserver-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:39.824446  916052 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 19:57:41.829719  916052 pod_ready.go:104] pod "kube-controller-manager-functional-460513" is not "Ready", error: <nil>
	I1002 19:57:43.830373  916052 pod_ready.go:94] pod "kube-controller-manager-functional-460513" is "Ready"
	I1002 19:57:43.830387  916052 pod_ready.go:86] duration metric: took 4.005928031s for pod "kube-controller-manager-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:43.832749  916052 pod_ready.go:83] waiting for pod "kube-proxy-z7ghw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:43.836888  916052 pod_ready.go:94] pod "kube-proxy-z7ghw" is "Ready"
	I1002 19:57:43.836901  916052 pod_ready.go:86] duration metric: took 4.13947ms for pod "kube-proxy-z7ghw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:43.839025  916052 pod_ready.go:83] waiting for pod "kube-scheduler-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:43.902322  916052 pod_ready.go:94] pod "kube-scheduler-functional-460513" is "Ready"
	I1002 19:57:43.902337  916052 pod_ready.go:86] duration metric: took 63.300304ms for pod "kube-scheduler-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:43.902348  916052 pod_ready.go:40] duration metric: took 8.665336389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 19:57:43.959849  916052 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 19:57:43.962845  916052 out.go:179] * Done! kubectl is now configured to use "functional-460513" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 02 19:58:00 functional-460513 dockerd[6691]: time="2025-10-02T19:58:00.835667159Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 19:58:03 functional-460513 cri-dockerd[7470]: time="2025-10-02T19:58:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fba126debd01c89d3f6e6c837d5a21ba49c98f387b8be3c5ce7b33bc3d8ab693/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 19:58:03 functional-460513 dockerd[6691]: time="2025-10-02T19:58:03.384527077Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 19:58:13 functional-460513 dockerd[6691]: time="2025-10-02T19:58:13.307124277Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 19:58:14 functional-460513 dockerd[6691]: time="2025-10-02T19:58:14.331789963Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 19:58:35 functional-460513 dockerd[6691]: time="2025-10-02T19:58:35.412865557Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 19:58:35 functional-460513 cri-dockerd[7470]: time="2025-10-02T19:58:35Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Oct 02 19:58:44 functional-460513 dockerd[6691]: time="2025-10-02T19:58:44.318848656Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 19:59:17 functional-460513 dockerd[6691]: time="2025-10-02T19:59:17.332076152Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 19:59:35 functional-460513 dockerd[6691]: time="2025-10-02T19:59:35.325995031Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:00:48 functional-460513 dockerd[6691]: time="2025-10-02T20:00:48.471468567Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:00:48 functional-460513 cri-dockerd[7470]: time="2025-10-02T20:00:48Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Oct 02 20:01:06 functional-460513 dockerd[6691]: time="2025-10-02T20:01:06.335849446Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:02:03 functional-460513 cri-dockerd[7470]: time="2025-10-02T20:02:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3d46442f6c79adf72e007f3e8accf7692205eb23569cca7c37f8d18feafe459f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 20:02:03 functional-460513 dockerd[6691]: time="2025-10-02T20:02:03.716581858Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:02:15 functional-460513 dockerd[6691]: time="2025-10-02T20:02:15.414893795Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:02:15 functional-460513 cri-dockerd[7470]: time="2025-10-02T20:02:15Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	Oct 02 20:02:40 functional-460513 dockerd[6691]: time="2025-10-02T20:02:40.316974010Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:03:29 functional-460513 dockerd[6691]: time="2025-10-02T20:03:29.332182717Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:03:41 functional-460513 dockerd[6691]: time="2025-10-02T20:03:41.300761118Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:03:50 functional-460513 dockerd[6691]: time="2025-10-02T20:03:50.330533513Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:04:55 functional-460513 dockerd[6691]: time="2025-10-02T20:04:55.414614081Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:04:55 functional-460513 cri-dockerd[7470]: time="2025-10-02T20:04:55Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	Oct 02 20:07:46 functional-460513 dockerd[6691]: time="2025-10-02T20:07:46.415057721Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:07:46 functional-460513 cri-dockerd[7470]: time="2025-10-02T20:07:46Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bc256734b9fe8       nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8   10 minutes ago      Running             nginx                     0                   9789396a1419a       nginx-svc                                   default
	fe224927b52c5       05baa95f5142d                                                                   10 minutes ago      Running             kube-proxy                2                   3a14beb9805ac       kube-proxy-z7ghw                            kube-system
	33f4fa437242e       ba04bb24b9575                                                                   10 minutes ago      Running             storage-provisioner       2                   ed81eec2d77f8       storage-provisioner                         kube-system
	02510aec7c38d       138784d87c9c5                                                                   10 minutes ago      Running             coredns                   2                   afeaaf344747b       coredns-66bc5c9577-bb2ds                    kube-system
	26134bc61f5d9       a1894772a478e                                                                   10 minutes ago      Running             etcd                      2                   4cf8116593459       etcd-functional-460513                      kube-system
	db9b1101b76eb       43911e833d64d                                                                   10 minutes ago      Running             kube-apiserver            0                   29b883084068a       kube-apiserver-functional-460513            kube-system
	0e09036d7add9       b5f57ec6b9867                                                                   10 minutes ago      Running             kube-scheduler            2                   bfb48aab841d2       kube-scheduler-functional-460513            kube-system
	5d710be832df0       7eb2c6ff0c5a7                                                                   10 minutes ago      Running             kube-controller-manager   2                   605575f71f812       kube-controller-manager-functional-460513   kube-system
	11843acc93b83       138784d87c9c5                                                                   11 minutes ago      Exited              coredns                   1                   c216e22a818d3       coredns-66bc5c9577-bb2ds                    kube-system
	5e610b6f5c956       ba04bb24b9575                                                                   11 minutes ago      Exited              storage-provisioner       1                   d6bece758620b       storage-provisioner                         kube-system
	8013cb97c756c       05baa95f5142d                                                                   11 minutes ago      Exited              kube-proxy                1                   6fa6f9b610fc1       kube-proxy-z7ghw                            kube-system
	5459180499bcd       b5f57ec6b9867                                                                   11 minutes ago      Exited              kube-scheduler            1                   14d19245da307       kube-scheduler-functional-460513            kube-system
	4805d040cabcf       a1894772a478e                                                                   11 minutes ago      Exited              etcd                      1                   b61a48faa350c       etcd-functional-460513                      kube-system
	cccbefc54d3cd       7eb2c6ff0c5a7                                                                   11 minutes ago      Exited              kube-controller-manager   1                   b7d3e4afda29d       kube-controller-manager-functional-460513   kube-system
	
	
	==> coredns [02510aec7c38] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34079 - 21240 "HINFO IN 244857414700627593.4635503374353347991. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021849786s
	
	
	==> coredns [11843acc93b8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43425 - 16476 "HINFO IN 6420058890467486523.4324477465014152588. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020251933s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-460513
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-460513
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=functional-460513
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T19_55_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 19:55:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-460513
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:08:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:07:14 +0000   Thu, 02 Oct 2025 19:54:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:07:14 +0000   Thu, 02 Oct 2025 19:54:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:07:14 +0000   Thu, 02 Oct 2025 19:54:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:07:14 +0000   Thu, 02 Oct 2025 19:55:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-460513
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 383087b4c8744483b09343609d84322f
	  System UUID:                5b6ef310-3cb5-4b1c-978f-45f181f323cd
	  Boot ID:                    0abe58db-3afd-40ad-9a63-2ed98334b343
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-s8zx4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  default                     hello-node-connect-7d85dfc575-85j8h          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-bb2ds                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-460513                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-460513             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-460513    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-z7ghw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-460513             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node functional-460513 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node functional-460513 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node functional-460513 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-460513 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-460513 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-460513 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                12m                kubelet          Node functional-460513 status is now: NodeReady
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-460513 event: Registered Node functional-460513 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node functional-460513 event: Registered Node functional-460513 in Controller
	  Warning  ContainerGCFailed        10m (x2 over 11m)  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-460513 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-460513 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-460513 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node functional-460513 event: Registered Node functional-460513 in Controller
	
	
	==> dmesg <==
	[Oct 2 18:16] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 19:46] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [26134bc61f5d] <==
	{"level":"warn","ts":"2025-10-02T19:57:30.172696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.203564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.231449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.255543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.266504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.329264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.337464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.364171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.390608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.429996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.451770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.474863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.502871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.549653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.571696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.604458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.632445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.711817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.749386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.773166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.809665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.895187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44136","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:07:29.316623Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1154}
	{"level":"info","ts":"2025-10-02T20:07:29.339888Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1154,"took":"22.906662ms","hash":2845891145,"current-db-size-bytes":3248128,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1507328,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-02T20:07:29.339943Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2845891145,"revision":1154,"compact-revision":-1}
	
	
	==> etcd [4805d040cabc] <==
	{"level":"warn","ts":"2025-10-02T19:56:24.649466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.669130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.692938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.722613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.742087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.758015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.852099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56828","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T19:57:09.748218Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T19:57:09.748293Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-460513","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T19:57:09.748479Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T19:57:16.751072Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T19:57:16.753295Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T19:57:16.753526Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T19:57:16.754984Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-02T19:57:16.755181Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T19:57:16.756496Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T19:57:16.756697Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T19:57:16.756785Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T19:57:16.756962Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T19:57:16.757051Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T19:57:16.757145Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T19:57:16.760135Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T19:57:16.760313Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T19:57:16.760384Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T19:57:16.760510Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-460513","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 20:08:04 up  4:50,  0 user,  load average: 0.13, 0.39, 1.25
	Linux functional-460513 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [db9b1101b76e] <==
	I1002 19:57:31.910420       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 19:57:31.920008       1 cache.go:39] Caches are synced for autoregister controller
	I1002 19:57:31.921093       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 19:57:31.921322       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 19:57:31.921533       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 19:57:31.921674       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 19:57:31.921966       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 19:57:31.927546       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 19:57:31.928922       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 19:57:31.930128       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 19:57:32.120722       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 19:57:32.628068       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1002 19:57:33.145257       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 19:57:33.146828       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 19:57:33.160804       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 19:57:33.783270       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 19:57:33.834297       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 19:57:33.873574       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 19:57:33.886358       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 19:57:35.224249       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 19:57:47.082177       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.248.125"}
	I1002 19:57:54.116376       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.89.238"}
	I1002 19:58:02.716762       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.203.206"}
	I1002 20:02:03.053370       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.214.198"}
	I1002 20:07:31.811789       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5d710be832df] <==
	I1002 19:57:35.170845       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 19:57:35.170870       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 19:57:35.172214       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 19:57:35.172205       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 19:57:35.176943       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 19:57:35.178569       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 19:57:35.178753       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 19:57:35.178865       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 19:57:35.184345       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 19:57:35.186679       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 19:57:35.189823       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 19:57:35.193902       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 19:57:35.195049       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 19:57:35.199815       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 19:57:35.210359       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 19:57:35.210452       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 19:57:35.213251       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 19:57:35.213478       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 19:57:35.213609       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 19:57:35.216464       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 19:57:35.216526       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 19:57:35.216569       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 19:57:35.217285       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 19:57:35.219381       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 19:57:35.224560       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [cccbefc54d3c] <==
	I1002 19:56:29.302392       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 19:56:29.307085       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 19:56:29.310379       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 19:56:29.319631       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 19:56:29.322784       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 19:56:29.326064       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 19:56:29.328302       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 19:56:29.332298       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 19:56:29.332513       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 19:56:29.332362       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 19:56:29.332660       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 19:56:29.332336       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 19:56:29.333765       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 19:56:29.333841       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 19:56:29.337080       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 19:56:29.339430       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 19:56:29.339806       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 19:56:29.339958       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 19:56:29.340109       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 19:56:29.340280       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 19:56:29.340402       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 19:56:29.342935       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 19:56:29.345450       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 19:56:29.348097       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 19:56:29.368572       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [8013cb97c756] <==
	I1002 19:56:26.293475       1 server_linux.go:53] "Using iptables proxy"
	I1002 19:56:26.542177       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 19:56:26.642813       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 19:56:26.642867       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 19:56:26.642965       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 19:56:26.881417       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 19:56:26.881477       1 server_linux.go:132] "Using iptables Proxier"
	I1002 19:56:26.953908       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 19:56:26.969663       1 server.go:527] "Version info" version="v1.34.1"
	I1002 19:56:26.969688       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 19:56:26.978549       1 config.go:200] "Starting service config controller"
	I1002 19:56:26.978578       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 19:56:27.018283       1 config.go:106] "Starting endpoint slice config controller"
	I1002 19:56:27.018304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 19:56:27.018327       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 19:56:27.018332       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 19:56:27.018825       1 config.go:309] "Starting node config controller"
	I1002 19:56:27.018838       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 19:56:27.018845       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 19:56:27.079689       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 19:56:27.118988       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 19:56:27.119021       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [fe224927b52c] <==
	I1002 19:57:33.648761       1 server_linux.go:53] "Using iptables proxy"
	I1002 19:57:33.759095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 19:57:33.860901       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 19:57:33.862375       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 19:57:33.862589       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 19:57:33.950063       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 19:57:33.953446       1 server_linux.go:132] "Using iptables Proxier"
	I1002 19:57:33.978599       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 19:57:33.978920       1 server.go:527] "Version info" version="v1.34.1"
	I1002 19:57:33.978939       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 19:57:33.981527       1 config.go:200] "Starting service config controller"
	I1002 19:57:33.981550       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 19:57:33.983742       1 config.go:106] "Starting endpoint slice config controller"
	I1002 19:57:33.983757       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 19:57:33.983780       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 19:57:33.983784       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 19:57:33.984516       1 config.go:309] "Starting node config controller"
	I1002 19:57:33.984523       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 19:57:33.984530       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 19:57:34.082480       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 19:57:34.085275       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 19:57:34.085312       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0e09036d7add] <==
	I1002 19:57:31.457845       1 serving.go:386] Generated self-signed cert in-memory
	I1002 19:57:33.347221       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 19:57:33.347258       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 19:57:33.354010       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 19:57:33.354105       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 19:57:33.354127       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 19:57:33.356128       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 19:57:33.366203       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:57:33.366227       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:57:33.366246       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:57:33.366252       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:57:33.454928       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 19:57:33.467906       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:57:33.467992       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [5459180499bc] <==
	I1002 19:56:24.387641       1 serving.go:386] Generated self-signed cert in-memory
	I1002 19:56:26.440161       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 19:56:26.440199       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 19:56:26.454187       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 19:56:26.454290       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 19:56:26.455734       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 19:56:26.461260       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 19:56:26.461736       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:56:26.461761       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:56:26.461780       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:56:26.461791       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:56:26.562274       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:56:26.562343       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 19:56:26.562445       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:57:09.734861       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 19:57:09.734884       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 19:57:09.734919       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 19:57:09.734950       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:57:09.734971       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1002 19:57:09.735006       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:57:09.735269       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 19:57:09.735298       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 20:06:27 functional-460513 kubelet[7846]: E1002 20:06:27.100133    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:06:33 functional-460513 kubelet[7846]: E1002 20:06:33.099393    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:06:37 functional-460513 kubelet[7846]: E1002 20:06:37.099474    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
	Oct 02 20:06:40 functional-460513 kubelet[7846]: E1002 20:06:40.100493    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:06:48 functional-460513 kubelet[7846]: E1002 20:06:48.100075    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:06:50 functional-460513 kubelet[7846]: E1002 20:06:50.101040    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
	Oct 02 20:06:51 functional-460513 kubelet[7846]: E1002 20:06:51.099850    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:07:02 functional-460513 kubelet[7846]: E1002 20:07:02.099993    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:07:03 functional-460513 kubelet[7846]: E1002 20:07:03.099268    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
	Oct 02 20:07:03 functional-460513 kubelet[7846]: E1002 20:07:03.099293    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:07:14 functional-460513 kubelet[7846]: E1002 20:07:14.107571    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:07:18 functional-460513 kubelet[7846]: E1002 20:07:18.100972    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
	Oct 02 20:07:18 functional-460513 kubelet[7846]: E1002 20:07:18.114155    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:07:27 functional-460513 kubelet[7846]: E1002 20:07:27.100044    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:07:30 functional-460513 kubelet[7846]: E1002 20:07:30.100320    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:07:31 functional-460513 kubelet[7846]: E1002 20:07:31.099419    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
	Oct 02 20:07:42 functional-460513 kubelet[7846]: E1002 20:07:42.099927    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:07:45 functional-460513 kubelet[7846]: E1002 20:07:45.100462    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:07:46 functional-460513 kubelet[7846]: E1002 20:07:46.418871    7846 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 20:07:46 functional-460513 kubelet[7846]: E1002 20:07:46.418941    7846 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 20:07:46 functional-460513 kubelet[7846]: E1002 20:07:46.419039    7846 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-s8zx4_default(7076f721-7fec-48cb-b884-2ff8c9abbcd2): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:07:46 functional-460513 kubelet[7846]: E1002 20:07:46.419084    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
	Oct 02 20:07:54 functional-460513 kubelet[7846]: E1002 20:07:54.099718    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:07:58 functional-460513 kubelet[7846]: E1002 20:07:58.099971    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:08:01 functional-460513 kubelet[7846]: E1002 20:08:01.099328    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-s8zx4" podUID="7076f721-7fec-48cb-b884-2ff8c9abbcd2"
	
	
	==> storage-provisioner [33f4fa437242] <==
	W1002 20:07:39.847243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:41.850325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:41.857089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:43.860413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:43.865494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:45.869235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:45.874433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:47.878699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:47.883908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:49.886809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:49.894062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:51.897029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:51.902122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:53.905584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:53.912648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:55.915470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:55.920080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:57.924335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:57.929254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:59.932772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:07:59.937965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:08:01.940652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:08:01.945895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:08:03.949352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:08:03.954065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [5e610b6f5c95] <==
	W1002 19:56:45.555569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:45.562256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:47.582004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:47.589875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:49.592960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:49.598092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:51.600865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:51.608664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:53.612538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:53.618914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:55.622089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:55.627118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:57.630513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:57.635573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:59.638282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:59.643254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:01.646302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:01.653892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:03.657134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:03.662085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:05.665042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:05.669486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:07.672971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:07.679989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	E1002 19:57:09.680844       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-460513 -n functional-460513
helpers_test.go:269: (dbg) Run:  kubectl --context functional-460513 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-s8zx4 hello-node-connect-7d85dfc575-85j8h sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-460513 describe pod hello-node-75c85bcc94-s8zx4 hello-node-connect-7d85dfc575-85j8h sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-460513 describe pod hello-node-75c85bcc94-s8zx4 hello-node-connect-7d85dfc575-85j8h sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-s8zx4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-460513/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:02:02 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cg2d6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cg2d6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m2s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-s8zx4 to functional-460513
	  Warning  Failed     4m36s (x3 over 6m2s)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m10s (x5 over 6m2s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m10s (x5 over 6m2s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m10s (x2 over 5m50s)  kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    47s (x22 over 6m1s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     47s (x22 over 6m1s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-85j8h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-460513/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 19:58:02 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ps69t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ps69t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-85j8h to functional-460513
	  Normal   Pulling    6m59s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m59s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m59s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m49s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m49s (x21 over 10m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-460513/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 19:57:59 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8qj7g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-8qj7g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-460513
	  Warning  Failed     8m48s (x3 over 10m)    kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m17s (x5 over 10m)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m17s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     7m17s (x2 over 9m30s)  kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m4s (x20 over 10m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m51s (x21 over 10m)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.29s)
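Every image-pull failure recorded in this test is the same Docker Hub response (toomanyrequests: unauthenticated pull rate limit), not a cluster-side problem. When triaging similar runs, the remaining anonymous pull quota on the affected host can be checked against Docker Hub's documented rate-limit endpoint; the commands below are only a sketch of that check (they assume curl and jq are available on the runner) and are not part of the test suite:

# Request an anonymous token scoped to Docker Hub's rate-limit preview repository.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
# A HEAD request reports the current quota in the ratelimit-limit and
# ratelimit-remaining response headers without counting as a pull.
curl -sI -H "Authorization: Bearer $TOKEN" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit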

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (248.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [d5b35c47-b886-46a7-8a67-e8931dcb50e9] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.021488637s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-460513 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-460513 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-460513 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-460513 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [27481bf8-0750-44a3-93cc-d73e2662010e] Pending
helpers_test.go:352: "sp-pod" [27481bf8-0750-44a3-93cc-d73e2662010e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-460513 -n functional-460513
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-02 20:02:00.409149294 +0000 UTC m=+894.823923883
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-460513 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-460513 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-460513/192.168.49.2
Start Time:       Thu, 02 Oct 2025 19:57:59 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:  10.244.0.8
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8qj7g (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-8qj7g:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/sp-pod to functional-460513
Warning  Failed     2m43s (x3 over 4m)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    72s (x5 over 4m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     72s (x5 over 4m)     kubelet            Error: ErrImagePull
Warning  Failed     72s (x2 over 3m25s)  kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    7s (x15 over 3m59s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     7s (x15 over 3m59s)  kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-460513 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-460513 logs sp-pod -n default: exit status 1 (121.034905ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-460513 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 4m0s: context deadline exceeded
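The pod description above fixes most of what the applied manifests must contain: a PersistentVolumeClaim named myclaim, and a pod sp-pod labelled test=storage-provisioner whose myfrontend container runs docker.io/nginx and mounts the claim at /tmp/mount. The snippet below is a hypothetical reconstruction for readers following along; the access mode and requested size are assumptions, and the real testdata/storage-provisioner/pvc.yaml and pod.yaml may differ:

kubectl --context functional-460513 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]   # assumption; not shown in the describe output
  resources:
    requests:
      storage: 500Mi               # assumption; not shown in the describe output
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/nginx
    volumeMounts:
    - name: mypd
      mountPath: /tmp/mount
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF

Whatever the exact manifests, the failure mode matches the other tests above: the nginx pull never succeeds because of the Docker Hub rate limit, so the pod stays in ImagePullBackOff and the 4m0s wait expires.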
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-460513
helpers_test.go:243: (dbg) docker inspect functional-460513:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e",
	        "Created": "2025-10-02T19:54:34.194287273Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 908898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T19:54:34.236525194Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/hostname",
	        "HostsPath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/hosts",
	        "LogPath": "/var/lib/docker/containers/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e/b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e-json.log",
	        "Name": "/functional-460513",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-460513:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-460513",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b8078c0512be5e879b5134e27e6af3103f35cf57b611aaf29468582bfa479b7e",
	                "LowerDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36-init/diff:/var/lib/docker/overlay2/4168a6b35c0191bd222903a9b469ebe18ea5b9d5b6daa344f4a494c07b59f9f7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb2c9b449a3d89e392b79bf1325d8b59cc262f54a697e258214e3f921a516b36/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-460513",
	                "Source": "/var/lib/docker/volumes/functional-460513/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-460513",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-460513",
	                "name.minikube.sigs.k8s.io": "functional-460513",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bee011508c270ebc2e408f73210ac3ca6232133e06ba77fc00469a23ae840d07",
	            "SandboxKey": "/var/run/docker/netns/bee011508c27",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33896"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33897"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33898"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33899"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-460513": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:74:65:19:66:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "46436a08b18539b6074e0247d0c1aef98e52bada9514c01c857330a2e439d034",
	                    "EndpointID": "dc8104321418323876b9e2a21a7a9e8d25ae8fe4b72705ceac33234352c25405",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-460513",
	                        "b8078c0512be"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
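The inspect output lists the container ports the profile publishes on the loopback interface (for example 8441/tcp mapped to 127.0.0.1:33899). When only a single field is needed rather than the full JSON, docker inspect accepts a Go-template format string; a small sketch of pulling that host port out of the structure shown above:

# Print only the host port bound to the container's 8441/tcp endpoint.
docker inspect functional-460513 \
  --format '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}'
# Based on the NetworkSettings.Ports block above, this prints: 33899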
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-460513 -n functional-460513
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-460513 logs -n 25: (1.234737757s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-460513 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:56 UTC │ 02 Oct 25 19:56 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 19:56 UTC │ 02 Oct 25 19:56 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 19:56 UTC │ 02 Oct 25 19:56 UTC │
	│ kubectl │ functional-460513 kubectl -- --context functional-460513 get pods                                                          │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:56 UTC │ 02 Oct 25 19:56 UTC │
	│ start   │ -p functional-460513 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:56 UTC │ 02 Oct 25 19:57 UTC │
	│ service │ invalid-svc -p functional-460513                                                                                           │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │                     │
	│ config  │ functional-460513 config unset cpus                                                                                        │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ cp      │ functional-460513 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ config  │ functional-460513 config get cpus                                                                                          │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │                     │
	│ config  │ functional-460513 config set cpus 2                                                                                        │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ config  │ functional-460513 config get cpus                                                                                          │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ config  │ functional-460513 config unset cpus                                                                                        │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ ssh     │ functional-460513 ssh -n functional-460513 sudo cat /home/docker/cp-test.txt                                               │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ config  │ functional-460513 config get cpus                                                                                          │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │                     │
	│ ssh     │ functional-460513 ssh echo hello                                                                                           │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ cp      │ functional-460513 cp functional-460513:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4024348198/001/cp-test.txt │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ ssh     │ functional-460513 ssh cat /etc/hostname                                                                                    │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ ssh     │ functional-460513 ssh -n functional-460513 sudo cat /home/docker/cp-test.txt                                               │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ tunnel  │ functional-460513 tunnel --alsologtostderr                                                                                 │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │                     │
	│ tunnel  │ functional-460513 tunnel --alsologtostderr                                                                                 │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │                     │
	│ cp      │ functional-460513 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ tunnel  │ functional-460513 tunnel --alsologtostderr                                                                                 │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │                     │
	│ ssh     │ functional-460513 ssh -n functional-460513 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:57 UTC │ 02 Oct 25 19:57 UTC │
	│ addons  │ functional-460513 addons list                                                                                              │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:58 UTC │ 02 Oct 25 19:58 UTC │
	│ addons  │ functional-460513 addons list -o json                                                                                      │ functional-460513 │ jenkins │ v1.37.0 │ 02 Oct 25 19:58 UTC │ 02 Oct 25 19:58 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 19:56:44
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 19:56:44.872327  916052 out.go:360] Setting OutFile to fd 1 ...
	I1002 19:56:44.872466  916052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:56:44.872469  916052 out.go:374] Setting ErrFile to fd 2...
	I1002 19:56:44.872472  916052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:56:44.872744  916052 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
	I1002 19:56:44.873121  916052 out.go:368] Setting JSON to false
	I1002 19:56:44.874123  916052 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16743,"bootTime":1759418262,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 19:56:44.874184  916052 start.go:140] virtualization:  
	I1002 19:56:44.877511  916052 out.go:179] * [functional-460513] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 19:56:44.881444  916052 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 19:56:44.881596  916052 notify.go:221] Checking for updates...
	I1002 19:56:44.887366  916052 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:56:44.890262  916052 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig
	I1002 19:56:44.893120  916052 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube
	I1002 19:56:44.896075  916052 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 19:56:44.899128  916052 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 19:56:44.902531  916052 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 19:56:44.902635  916052 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 19:56:44.934419  916052 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 19:56:44.934528  916052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 19:56:45.019377  916052 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 19:56:44.98857067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 19:56:45.019523  916052 docker.go:319] overlay module found
	I1002 19:56:45.047367  916052 out.go:179] * Using the docker driver based on existing profile
	I1002 19:56:45.059739  916052 start.go:306] selected driver: docker
	I1002 19:56:45.059753  916052 start.go:936] validating driver "docker" against &{Name:functional-460513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:56:45.059874  916052 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 19:56:45.059994  916052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 19:56:45.178283  916052 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 19:56:45.163360572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 19:56:45.179029  916052 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 19:56:45.179068  916052 cni.go:84] Creating CNI manager for ""
	I1002 19:56:45.179137  916052 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 19:56:45.179193  916052 start.go:350] cluster config:
	{Name:functional-460513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:56:45.182494  916052 out.go:179] * Starting "functional-460513" primary control-plane node in "functional-460513" cluster
	I1002 19:56:45.185622  916052 cache.go:124] Beginning downloading kic base image for docker with docker
	I1002 19:56:45.191496  916052 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 19:56:45.197143  916052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 19:56:45.197242  916052 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-881023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1002 19:56:45.197253  916052 cache.go:59] Caching tarball of preloaded images
	I1002 19:56:45.197254  916052 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 19:56:45.197400  916052 preload.go:233] Found /home/jenkins/minikube-integration/21683-881023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 19:56:45.197410  916052 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1002 19:56:45.197535  916052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/config.json ...
	I1002 19:56:45.220607  916052 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 19:56:45.220620  916052 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 19:56:45.220632  916052 cache.go:233] Successfully downloaded all kic artifacts
	I1002 19:56:45.220662  916052 start.go:361] acquireMachinesLock for functional-460513: {Name:mk2abc62e3e0e90e6ec072747a3bff43c0103c14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:56:45.220728  916052 start.go:365] duration metric: took 44.439µs to acquireMachinesLock for "functional-460513"
	I1002 19:56:45.220749  916052 start.go:97] Skipping create...Using existing machine configuration
	I1002 19:56:45.220754  916052 fix.go:55] fixHost starting: 
	I1002 19:56:45.221039  916052 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
	I1002 19:56:45.253214  916052 fix.go:113] recreateIfNeeded on functional-460513: state=Running err=<nil>
	W1002 19:56:45.253260  916052 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 19:56:45.257346  916052 out.go:252] * Updating the running docker "functional-460513" container ...
	I1002 19:56:45.257388  916052 machine.go:93] provisionDockerMachine start ...
	I1002 19:56:45.257511  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:45.289314  916052 main.go:141] libmachine: Using SSH client type: native
	I1002 19:56:45.289671  916052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33896 <nil> <nil>}
	I1002 19:56:45.289679  916052 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 19:56:45.485835  916052 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-460513
	
	I1002 19:56:45.485863  916052 ubuntu.go:182] provisioning hostname "functional-460513"
	I1002 19:56:45.485934  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:45.506368  916052 main.go:141] libmachine: Using SSH client type: native
	I1002 19:56:45.506662  916052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33896 <nil> <nil>}
	I1002 19:56:45.506671  916052 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-460513 && echo "functional-460513" | sudo tee /etc/hostname
	I1002 19:56:45.661345  916052 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-460513
	
	I1002 19:56:45.661415  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:45.680967  916052 main.go:141] libmachine: Using SSH client type: native
	I1002 19:56:45.681377  916052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33896 <nil> <nil>}
	I1002 19:56:45.681393  916052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-460513' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-460513/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-460513' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 19:56:45.817758  916052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:56:45.817777  916052 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-881023/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-881023/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-881023/.minikube}
	I1002 19:56:45.817798  916052 ubuntu.go:190] setting up certificates
	I1002 19:56:45.817811  916052 provision.go:84] configureAuth start
	I1002 19:56:45.817877  916052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-460513
	I1002 19:56:45.835746  916052 provision.go:143] copyHostCerts
	I1002 19:56:45.835824  916052 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-881023/.minikube/ca.pem, removing ...
	I1002 19:56:45.835843  916052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-881023/.minikube/ca.pem
	I1002 19:56:45.835926  916052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-881023/.minikube/ca.pem (1078 bytes)
	I1002 19:56:45.836025  916052 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-881023/.minikube/cert.pem, removing ...
	I1002 19:56:45.836029  916052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-881023/.minikube/cert.pem
	I1002 19:56:45.836053  916052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-881023/.minikube/cert.pem (1123 bytes)
	I1002 19:56:45.836124  916052 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-881023/.minikube/key.pem, removing ...
	I1002 19:56:45.836128  916052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-881023/.minikube/key.pem
	I1002 19:56:45.836152  916052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-881023/.minikube/key.pem (1675 bytes)
	I1002 19:56:45.836197  916052 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-881023/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca-key.pem org=jenkins.functional-460513 san=[127.0.0.1 192.168.49.2 functional-460513 localhost minikube]
	I1002 19:56:46.761305  916052 provision.go:177] copyRemoteCerts
	I1002 19:56:46.761360  916052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 19:56:46.761410  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:46.778690  916052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
	I1002 19:56:46.873703  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 19:56:46.892842  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 19:56:46.911944  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 19:56:46.930484  916052 provision.go:87] duration metric: took 1.112649737s to configureAuth
	I1002 19:56:46.930502  916052 ubuntu.go:206] setting minikube options for container-runtime
	I1002 19:56:46.930705  916052 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 19:56:46.930754  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:46.947856  916052 main.go:141] libmachine: Using SSH client type: native
	I1002 19:56:46.948172  916052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33896 <nil> <nil>}
	I1002 19:56:46.948179  916052 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 19:56:47.083523  916052 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1002 19:56:47.083534  916052 ubuntu.go:71] root file system type: overlay
	I1002 19:56:47.083643  916052 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 19:56:47.083710  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:47.104561  916052 main.go:141] libmachine: Using SSH client type: native
	I1002 19:56:47.104875  916052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33896 <nil> <nil>}
	I1002 19:56:47.104954  916052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 19:56:47.263509  916052 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 19:56:47.263594  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:47.281253  916052 main.go:141] libmachine: Using SSH client type: native
	I1002 19:56:47.281577  916052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33896 <nil> <nil>}
	I1002 19:56:47.281593  916052 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 19:56:47.430740  916052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:56:47.430755  916052 machine.go:96] duration metric: took 2.173359689s to provisionDockerMachine
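	(Annotation, not part of the captured log.) The provisioning step above updates /lib/systemd/system/docker.service with a diff-or-replace idiom: the new unit is written to docker.service.new, compared against the live unit, and only swapped in (followed by daemon-reload, enable and restart) when the two differ. A minimal stand-alone sketch of that same idiom, using the paths shown in this run:
	# Sketch of the idempotent unit update logged above (same paths as the log).
	NEW=/lib/systemd/system/docker.service.new
	CUR=/lib/systemd/system/docker.service
	if ! sudo diff -u "$CUR" "$NEW"; then
	    # Units differ: install the new one and restart docker.
	    sudo mv "$NEW" "$CUR"
	    sudo systemctl daemon-reload
	    sudo systemctl enable docker
	    sudo systemctl restart docker
	fi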
	I1002 19:56:47.430765  916052 start.go:294] postStartSetup for "functional-460513" (driver="docker")
	I1002 19:56:47.430774  916052 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 19:56:47.430837  916052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 19:56:47.430877  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:47.448810  916052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
	I1002 19:56:47.550492  916052 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 19:56:47.554365  916052 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 19:56:47.554384  916052 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 19:56:47.554395  916052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-881023/.minikube/addons for local assets ...
	I1002 19:56:47.554456  916052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-881023/.minikube/files for local assets ...
	I1002 19:56:47.554532  916052 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-881023/.minikube/files/etc/ssl/certs/8828842.pem -> 8828842.pem in /etc/ssl/certs
	I1002 19:56:47.554609  916052 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-881023/.minikube/files/etc/test/nested/copy/882884/hosts -> hosts in /etc/test/nested/copy/882884
	I1002 19:56:47.554658  916052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/882884
	I1002 19:56:47.563202  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/files/etc/ssl/certs/8828842.pem --> /etc/ssl/certs/8828842.pem (1708 bytes)
	I1002 19:56:47.591153  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/files/etc/test/nested/copy/882884/hosts --> /etc/test/nested/copy/882884/hosts (40 bytes)
	I1002 19:56:47.612344  916052 start.go:297] duration metric: took 181.562941ms for postStartSetup
	I1002 19:56:47.612432  916052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 19:56:47.612483  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:47.630560  916052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
	I1002 19:56:47.727148  916052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 19:56:47.732297  916052 fix.go:57] duration metric: took 2.51153522s for fixHost
	I1002 19:56:47.732313  916052 start.go:84] releasing machines lock for "functional-460513", held for 2.511577953s
	I1002 19:56:47.732397  916052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-460513
	I1002 19:56:47.749920  916052 ssh_runner.go:195] Run: cat /version.json
	I1002 19:56:47.749963  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:47.750206  916052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 19:56:47.750254  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:56:47.768735  916052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
	I1002 19:56:47.768881  916052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
	I1002 19:56:47.955881  916052 ssh_runner.go:195] Run: systemctl --version
	I1002 19:56:47.962571  916052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 19:56:47.967138  916052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 19:56:47.967213  916052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 19:56:47.975778  916052 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 19:56:47.975795  916052 start.go:496] detecting cgroup driver to use...
	I1002 19:56:47.975828  916052 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 19:56:47.975934  916052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:56:47.990871  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 19:56:48.002873  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 19:56:48.016096  916052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 19:56:48.016176  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 19:56:48.026805  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:56:48.036288  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 19:56:48.046904  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:56:48.056016  916052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 19:56:48.064497  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 19:56:48.074466  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 19:56:48.083892  916052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 19:56:48.094614  916052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 19:56:48.102766  916052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 19:56:48.110993  916052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:56:48.256064  916052 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 19:56:48.473892  916052 start.go:496] detecting cgroup driver to use...
	I1002 19:56:48.473929  916052 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 19:56:48.473977  916052 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 19:56:48.489074  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:56:48.505599  916052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 19:56:48.532938  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:56:48.548902  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 19:56:48.562324  916052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:56:48.577056  916052 ssh_runner.go:195] Run: which cri-dockerd
	I1002 19:56:48.580858  916052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 19:56:48.588702  916052 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1002 19:56:48.601726  916052 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 19:56:48.741632  916052 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 19:56:48.881414  916052 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 19:56:48.881510  916052 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 19:56:48.896863  916052 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1002 19:56:48.910259  916052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:56:49.051494  916052 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 19:57:20.092321  916052 ssh_runner.go:235] Completed: sudo systemctl restart docker: (31.040803163s)
	I1002 19:57:20.092388  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 19:57:20.113529  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1002 19:57:20.131321  916052 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1002 19:57:20.165315  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 19:57:20.179820  916052 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 19:57:20.303986  916052 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 19:57:20.420937  916052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:57:20.546067  916052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 19:57:20.562520  916052 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1002 19:57:20.576159  916052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:57:20.708327  916052 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1002 19:57:20.791258  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1002 19:57:20.806148  916052 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 19:57:20.806204  916052 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 19:57:20.810212  916052 start.go:564] Will wait 60s for crictl version
	I1002 19:57:20.810265  916052 ssh_runner.go:195] Run: which crictl
	I1002 19:57:20.813794  916052 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 19:57:20.841754  916052 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
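	(Annotation, not part of the captured log.) The version probe above goes through cri-dockerd rather than the Docker API: /etc/crictl.yaml was pointed at unix:///var/run/cri-dockerd.sock a few lines earlier, which is why crictl reports RuntimeName docker with RuntimeApiVersion v1. Assuming the same socket path, the check can be reproduced by hand:
	# Query the CRI endpoint configured earlier in this run (socket path taken from the log).
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	# Or rely on the /etc/crictl.yaml written above:
	sudo /usr/local/bin/crictl version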
	I1002 19:57:20.841814  916052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 19:57:20.864797  916052 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 19:57:20.890396  916052 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
	I1002 19:57:20.890484  916052 cli_runner.go:164] Run: docker network inspect functional-460513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 19:57:20.906438  916052 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 19:57:20.913290  916052 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 19:57:20.916144  916052 kubeadm.go:883] updating cluster {Name:functional-460513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 19:57:20.916275  916052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 19:57:20.916350  916052 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 19:57:20.934155  916052 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-460513
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1002 19:57:20.934166  916052 docker.go:621] Images already preloaded, skipping extraction
	I1002 19:57:20.934226  916052 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 19:57:20.953899  916052 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-460513
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1002 19:57:20.953928  916052 cache_images.go:85] Images are preloaded, skipping loading
	I1002 19:57:20.953936  916052 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 docker true true} ...
	I1002 19:57:20.954035  916052 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-460513 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 19:57:20.954104  916052 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 19:57:21.009990  916052 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 19:57:21.010015  916052 cni.go:84] Creating CNI manager for ""
	I1002 19:57:21.010037  916052 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 19:57:21.010045  916052 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 19:57:21.010068  916052 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-460513 NodeName:functional-460513 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:
map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 19:57:21.010199  916052 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-460513"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 19:57:21.010271  916052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 19:57:21.018636  916052 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 19:57:21.018697  916052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 19:57:21.026393  916052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1002 19:57:21.040543  916052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 19:57:21.053610  916052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
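	(Annotation, not part of the captured log.) The kubeadm config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document file) is staged as /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch of how it could be spot-checked in place, using the binary path from this run and assuming this kubeadm build ships the 'config validate' subcommand (present in recent releases):
	# Validate the staged config before a control-plane restart (paths from the log above).
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# Print kubeadm's defaults to compare against the overrides (enable-admission-plugins, leader-elect, ...):
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config print init-defaults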
	I1002 19:57:21.066875  916052 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 19:57:21.070586  916052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:57:21.194457  916052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 19:57:21.207897  916052 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513 for IP: 192.168.49.2
	I1002 19:57:21.207908  916052 certs.go:195] generating shared ca certs ...
	I1002 19:57:21.207923  916052 certs.go:227] acquiring lock for ca certs: {Name:mk8d4e351e81262a2dea8d7403e1df60e121408f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:57:21.208078  916052 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-881023/.minikube/ca.key
	I1002 19:57:21.208124  916052 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-881023/.minikube/proxy-client-ca.key
	I1002 19:57:21.208131  916052 certs.go:257] generating profile certs ...
	I1002 19:57:21.208212  916052 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.key
	I1002 19:57:21.208255  916052 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/apiserver.key.323dabdf
	I1002 19:57:21.208289  916052 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/proxy-client.key
	I1002 19:57:21.208401  916052 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/882884.pem (1338 bytes)
	W1002 19:57:21.208427  916052 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-881023/.minikube/certs/882884_empty.pem, impossibly tiny 0 bytes
	I1002 19:57:21.208434  916052 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 19:57:21.208456  916052 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/ca.pem (1078 bytes)
	I1002 19:57:21.208476  916052 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/cert.pem (1123 bytes)
	I1002 19:57:21.208499  916052 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-881023/.minikube/certs/key.pem (1675 bytes)
	I1002 19:57:21.208539  916052 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-881023/.minikube/files/etc/ssl/certs/8828842.pem (1708 bytes)
	I1002 19:57:21.209132  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 19:57:21.229021  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 19:57:21.247707  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 19:57:21.266163  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 19:57:21.284542  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 19:57:21.302904  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 19:57:21.320596  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 19:57:21.339003  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 19:57:21.356332  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 19:57:21.373604  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/certs/882884.pem --> /usr/share/ca-certificates/882884.pem (1338 bytes)
	I1002 19:57:21.391343  916052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-881023/.minikube/files/etc/ssl/certs/8828842.pem --> /usr/share/ca-certificates/8828842.pem (1708 bytes)
	I1002 19:57:21.409748  916052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 19:57:21.423179  916052 ssh_runner.go:195] Run: openssl version
	I1002 19:57:21.429874  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 19:57:21.438833  916052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:57:21.443683  916052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:48 /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:57:21.443743  916052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:57:21.485942  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 19:57:21.494107  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/882884.pem && ln -fs /usr/share/ca-certificates/882884.pem /etc/ssl/certs/882884.pem"
	I1002 19:57:21.502433  916052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/882884.pem
	I1002 19:57:21.506367  916052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 19:54 /usr/share/ca-certificates/882884.pem
	I1002 19:57:21.506427  916052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/882884.pem
	I1002 19:57:21.547745  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/882884.pem /etc/ssl/certs/51391683.0"
	I1002 19:57:21.555870  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8828842.pem && ln -fs /usr/share/ca-certificates/8828842.pem /etc/ssl/certs/8828842.pem"
	I1002 19:57:21.564255  916052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8828842.pem
	I1002 19:57:21.567967  916052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 19:54 /usr/share/ca-certificates/8828842.pem
	I1002 19:57:21.568038  916052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8828842.pem
	I1002 19:57:21.609384  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8828842.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 19:57:21.617752  916052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 19:57:21.621662  916052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 19:57:21.663057  916052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 19:57:21.704251  916052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 19:57:21.745408  916052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 19:57:21.787171  916052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 19:57:21.828943  916052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
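
The openssl x509 -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours before the restart proceeds. A minimal Go sketch of the same check, assuming a PEM-encoded certificate path (the path is copied from the log; the rest is illustrative, not minikube's own code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}
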
	I1002 19:57:21.871143  916052 kubeadm.go:400] StartCluster: {Name:functional-460513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:57:21.871327  916052 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 19:57:21.889849  916052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 19:57:21.897839  916052 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 19:57:21.897861  916052 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 19:57:21.897925  916052 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 19:57:21.905623  916052 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 19:57:21.906151  916052 kubeconfig.go:125] found "functional-460513" server: "https://192.168.49.2:8441"
	I1002 19:57:21.907409  916052 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 19:57:21.915470  916052 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 19:54:45.719399576 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 19:57:21.064854999 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
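
The drift check above is just a diff of the kubeadm config already on the node against the freshly rendered one; because the test changed enable-admission-plugins to NamespaceAutoProvision, the two files differ and minikube decides to reconfigure the control plane. A hedged Go sketch of that decision, using the file paths from the log (the comparison itself is illustrative, not minikube's implementation):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// Paths mirror the log; reading them only works on the minikube node itself.
	current, errCur := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	fresh, errNew := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if errCur != nil || errNew != nil {
		fmt.Println("cannot read configs:", errCur, errNew)
		return
	}
	if bytes.Equal(current, fresh) {
		fmt.Println("no kubeadm config drift; existing control plane can be reused")
		return
	}
	// Drift detected (here: enable-admission-plugins switched to NamespaceAutoProvision),
	// so the new file replaces the old one and the kubeadm init phases are re-run.
	fmt.Println("config drift detected; rewriting kubeadm.yaml and reconfiguring")
}
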
	I1002 19:57:21.916062  916052 kubeadm.go:1160] stopping kube-system containers ...
	I1002 19:57:21.916132  916052 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 19:57:21.951612  916052 docker.go:484] Stopping containers: [3b07bcd4bc1c 11843acc93b8 5e610b6f5c95 8013cb97c756 5459180499bc 4805d040cabc cccbefc54d3c 14d19245da30 d6bece758620 c216e22a818d 6fa6f9b610fc c60049a22e15 b7d3e4afda29 b61a48faa350 d194472b4bf8 6867bc4873bc 0b3cf02b86e8 ad2ef11072e0 b285b18b89de e1dc10703a0c b6ee35384076 20724a7ef1a3 8072c3662037 bf25eae9bbc2 f788e75d8dee 00d45b54f98d 818052c1c08f bf2907225e1a]
	I1002 19:57:21.951694  916052 ssh_runner.go:195] Run: docker stop 3b07bcd4bc1c 11843acc93b8 5e610b6f5c95 8013cb97c756 5459180499bc 4805d040cabc cccbefc54d3c 14d19245da30 d6bece758620 c216e22a818d 6fa6f9b610fc c60049a22e15 b7d3e4afda29 b61a48faa350 d194472b4bf8 6867bc4873bc 0b3cf02b86e8 ad2ef11072e0 b285b18b89de e1dc10703a0c b6ee35384076 20724a7ef1a3 8072c3662037 bf25eae9bbc2 f788e75d8dee 00d45b54f98d 818052c1c08f bf2907225e1a
	I1002 19:57:21.978611  916052 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 19:57:22.097786  916052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 19:57:22.106238  916052 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  2 19:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 19:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  2 19:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 19:54 /etc/kubernetes/scheduler.conf
	
	I1002 19:57:22.106300  916052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 19:57:22.114917  916052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 19:57:22.122813  916052 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 19:57:22.122871  916052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 19:57:22.130373  916052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 19:57:22.138390  916052 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 19:57:22.138450  916052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 19:57:22.146124  916052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 19:57:22.154249  916052 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 19:57:22.154303  916052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 19:57:22.162365  916052 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 19:57:22.170405  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:57:22.218615  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:57:25.659891  916052 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.441251725s)
	I1002 19:57:25.659949  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:57:25.886420  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:57:25.946021  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:57:26.046592  916052 api_server.go:52] waiting for apiserver process to appear ...
	I1002 19:57:26.046657  916052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:57:26.546823  916052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:57:27.047094  916052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:57:27.547732  916052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:57:27.589854  916052 api_server.go:72] duration metric: took 1.543274062s to wait for apiserver process to appear ...
	I1002 19:57:27.589868  916052 api_server.go:88] waiting for apiserver healthz status ...
	I1002 19:57:27.589885  916052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 19:57:31.753262  916052 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 19:57:31.753279  916052 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 19:57:31.753295  916052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 19:57:31.793943  916052 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 19:57:31.793957  916052 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 19:57:32.090337  916052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 19:57:32.108978  916052 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 19:57:32.108995  916052 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 19:57:32.590330  916052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 19:57:32.605209  916052 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 19:57:32.605225  916052 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 19:57:33.090888  916052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 19:57:33.107580  916052 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 19:57:33.107598  916052 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 19:57:33.590003  916052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 19:57:33.598674  916052 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 19:57:33.614383  916052 api_server.go:141] control plane version: v1.34.1
	I1002 19:57:33.614405  916052 api_server.go:131] duration metric: took 6.024531606s to wait for apiserver health ...
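
The 403/500/200 sequence above is the normal progression while the restarted apiserver finishes its post-start hooks: anonymous requests are first rejected, then the rbac/bootstrap-roles and scheduling hooks report failures, and finally /healthz returns a plain "ok". A minimal Go sketch of the same polling loop, assuming anonymous HTTPS access to the endpoint shown in the log (TLS verification is skipped only because this is an illustrative unauthenticated probe):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8441/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
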
	I1002 19:57:33.614412  916052 cni.go:84] Creating CNI manager for ""
	I1002 19:57:33.614423  916052 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 19:57:33.617895  916052 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 19:57:33.621799  916052 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 19:57:33.632589  916052 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 19:57:33.649519  916052 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 19:57:33.656345  916052 system_pods.go:59] 7 kube-system pods found
	I1002 19:57:33.656372  916052 system_pods.go:61] "coredns-66bc5c9577-bb2ds" [bdb4fde2-1aa4-4469-9305-1f834deb7ff8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 19:57:33.656380  916052 system_pods.go:61] "etcd-functional-460513" [f75d720e-a504-4bfd-8064-f8329be4327c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 19:57:33.656389  916052 system_pods.go:61] "kube-apiserver-functional-460513" [fdb94b61-e4dd-4b3e-b049-a1295ba98edc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 19:57:33.656395  916052 system_pods.go:61] "kube-controller-manager-functional-460513" [72f69cd0-764b-4547-b465-9baa94361de4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 19:57:33.656400  916052 system_pods.go:61] "kube-proxy-z7ghw" [06232d19-d9a5-4182-8cc3-cee16711fdd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 19:57:33.656406  916052 system_pods.go:61] "kube-scheduler-functional-460513" [ea9bdc32-dab6-45d9-b411-f651b3fe6792] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 19:57:33.656418  916052 system_pods.go:61] "storage-provisioner" [d5b35c47-b886-46a7-8a67-e8931dcb50e9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 19:57:33.656430  916052 system_pods.go:74] duration metric: took 6.895385ms to wait for pod list to return data ...
	I1002 19:57:33.656438  916052 node_conditions.go:102] verifying NodePressure condition ...
	I1002 19:57:33.660091  916052 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 19:57:33.660112  916052 node_conditions.go:123] node cpu capacity is 2
	I1002 19:57:33.660122  916052 node_conditions.go:105] duration metric: took 3.680273ms to run NodePressure ...
	I1002 19:57:33.660193  916052 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:57:33.918886  916052 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 19:57:33.927198  916052 kubeadm.go:743] kubelet initialised
	I1002 19:57:33.927209  916052 kubeadm.go:744] duration metric: took 8.310069ms waiting for restarted kubelet to initialise ...
	I1002 19:57:33.927223  916052 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 19:57:33.943942  916052 ops.go:34] apiserver oom_adj: -16
	I1002 19:57:33.943953  916052 kubeadm.go:601] duration metric: took 12.046087277s to restartPrimaryControlPlane
	I1002 19:57:33.943961  916052 kubeadm.go:402] duration metric: took 12.072833528s to StartCluster
	I1002 19:57:33.944001  916052 settings.go:142] acquiring lock: {Name:mkac64c41b7df7147c80d0babd35e7ac38a28788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:57:33.944090  916052 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-881023/kubeconfig
	I1002 19:57:33.944801  916052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-881023/kubeconfig: {Name:mkf664a3dda5a2a163069a10b087ab5cefd54246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:57:33.945277  916052 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 19:57:33.945328  916052 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 19:57:33.945373  916052 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 19:57:33.945617  916052 addons.go:69] Setting storage-provisioner=true in profile "functional-460513"
	I1002 19:57:33.945642  916052 addons.go:238] Setting addon storage-provisioner=true in "functional-460513"
	W1002 19:57:33.945648  916052 addons.go:247] addon storage-provisioner should already be in state true
	I1002 19:57:33.945671  916052 host.go:66] Checking if "functional-460513" exists ...
	I1002 19:57:33.945677  916052 addons.go:69] Setting default-storageclass=true in profile "functional-460513"
	I1002 19:57:33.945690  916052 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-460513"
	I1002 19:57:33.945979  916052 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
	I1002 19:57:33.946101  916052 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
	I1002 19:57:33.951055  916052 out.go:179] * Verifying Kubernetes components...
	I1002 19:57:33.954208  916052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:57:34.001674  916052 addons.go:238] Setting addon default-storageclass=true in "functional-460513"
	W1002 19:57:34.001687  916052 addons.go:247] addon default-storageclass should already be in state true
	I1002 19:57:34.001713  916052 host.go:66] Checking if "functional-460513" exists ...
	I1002 19:57:34.002140  916052 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
	I1002 19:57:34.004156  916052 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 19:57:34.007313  916052 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 19:57:34.007326  916052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 19:57:34.007400  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:57:34.037362  916052 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 19:57:34.037375  916052 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 19:57:34.037433  916052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
	I1002 19:57:34.037628  916052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
	I1002 19:57:34.066778  916052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
	I1002 19:57:34.242902  916052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 19:57:34.272147  916052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 19:57:34.275842  916052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 19:57:35.146371  916052 node_ready.go:35] waiting up to 6m0s for node "functional-460513" to be "Ready" ...
	I1002 19:57:35.158464  916052 node_ready.go:49] node "functional-460513" is "Ready"
	I1002 19:57:35.158482  916052 node_ready.go:38] duration metric: took 12.092736ms for node "functional-460513" to be "Ready" ...
	I1002 19:57:35.158498  916052 api_server.go:52] waiting for apiserver process to appear ...
	I1002 19:57:35.158573  916052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:57:35.176741  916052 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 19:57:35.179540  916052 addons.go:514] duration metric: took 1.23414396s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 19:57:35.183749  916052 api_server.go:72] duration metric: took 1.238397876s to wait for apiserver process to appear ...
	I1002 19:57:35.183764  916052 api_server.go:88] waiting for apiserver healthz status ...
	I1002 19:57:35.183792  916052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 19:57:35.193637  916052 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 19:57:35.194833  916052 api_server.go:141] control plane version: v1.34.1
	I1002 19:57:35.194847  916052 api_server.go:131] duration metric: took 11.078122ms to wait for apiserver health ...
	I1002 19:57:35.194855  916052 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 19:57:35.198698  916052 system_pods.go:59] 7 kube-system pods found
	I1002 19:57:35.198718  916052 system_pods.go:61] "coredns-66bc5c9577-bb2ds" [bdb4fde2-1aa4-4469-9305-1f834deb7ff8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 19:57:35.198725  916052 system_pods.go:61] "etcd-functional-460513" [f75d720e-a504-4bfd-8064-f8329be4327c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 19:57:35.198732  916052 system_pods.go:61] "kube-apiserver-functional-460513" [fdb94b61-e4dd-4b3e-b049-a1295ba98edc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 19:57:35.198738  916052 system_pods.go:61] "kube-controller-manager-functional-460513" [72f69cd0-764b-4547-b465-9baa94361de4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 19:57:35.198742  916052 system_pods.go:61] "kube-proxy-z7ghw" [06232d19-d9a5-4182-8cc3-cee16711fdd0] Running
	I1002 19:57:35.198747  916052 system_pods.go:61] "kube-scheduler-functional-460513" [ea9bdc32-dab6-45d9-b411-f651b3fe6792] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 19:57:35.198750  916052 system_pods.go:61] "storage-provisioner" [d5b35c47-b886-46a7-8a67-e8931dcb50e9] Running
	I1002 19:57:35.198755  916052 system_pods.go:74] duration metric: took 3.895955ms to wait for pod list to return data ...
	I1002 19:57:35.198762  916052 default_sa.go:34] waiting for default service account to be created ...
	I1002 19:57:35.202440  916052 default_sa.go:45] found service account: "default"
	I1002 19:57:35.202453  916052 default_sa.go:55] duration metric: took 3.686961ms for default service account to be created ...
	I1002 19:57:35.202461  916052 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 19:57:35.205491  916052 system_pods.go:86] 7 kube-system pods found
	I1002 19:57:35.205509  916052 system_pods.go:89] "coredns-66bc5c9577-bb2ds" [bdb4fde2-1aa4-4469-9305-1f834deb7ff8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 19:57:35.205516  916052 system_pods.go:89] "etcd-functional-460513" [f75d720e-a504-4bfd-8064-f8329be4327c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 19:57:35.205524  916052 system_pods.go:89] "kube-apiserver-functional-460513" [fdb94b61-e4dd-4b3e-b049-a1295ba98edc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 19:57:35.205529  916052 system_pods.go:89] "kube-controller-manager-functional-460513" [72f69cd0-764b-4547-b465-9baa94361de4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 19:57:35.205533  916052 system_pods.go:89] "kube-proxy-z7ghw" [06232d19-d9a5-4182-8cc3-cee16711fdd0] Running
	I1002 19:57:35.205538  916052 system_pods.go:89] "kube-scheduler-functional-460513" [ea9bdc32-dab6-45d9-b411-f651b3fe6792] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 19:57:35.205543  916052 system_pods.go:89] "storage-provisioner" [d5b35c47-b886-46a7-8a67-e8931dcb50e9] Running
	I1002 19:57:35.205548  916052 system_pods.go:126] duration metric: took 3.083558ms to wait for k8s-apps to be running ...
	I1002 19:57:35.205556  916052 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 19:57:35.205612  916052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 19:57:35.221525  916052 system_svc.go:56] duration metric: took 15.959021ms WaitForService to wait for kubelet
	I1002 19:57:35.221542  916052 kubeadm.go:586] duration metric: took 1.276196263s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 19:57:35.221559  916052 node_conditions.go:102] verifying NodePressure condition ...
	I1002 19:57:35.232548  916052 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 19:57:35.232564  916052 node_conditions.go:123] node cpu capacity is 2
	I1002 19:57:35.232575  916052 node_conditions.go:105] duration metric: took 11.01106ms to run NodePressure ...
	I1002 19:57:35.232586  916052 start.go:242] waiting for startup goroutines ...
	I1002 19:57:35.232592  916052 start.go:247] waiting for cluster config update ...
	I1002 19:57:35.232602  916052 start.go:256] writing updated cluster config ...
	I1002 19:57:35.232921  916052 ssh_runner.go:195] Run: rm -f paused
	I1002 19:57:35.236989  916052 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 19:57:35.298150  916052 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bb2ds" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:37.303478  916052 pod_ready.go:94] pod "coredns-66bc5c9577-bb2ds" is "Ready"
	I1002 19:57:37.303492  916052 pod_ready.go:86] duration metric: took 2.005326603s for pod "coredns-66bc5c9577-bb2ds" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:37.306313  916052 pod_ready.go:83] waiting for pod "etcd-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:38.812798  916052 pod_ready.go:94] pod "etcd-functional-460513" is "Ready"
	I1002 19:57:38.812812  916052 pod_ready.go:86] duration metric: took 1.50648697s for pod "etcd-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:38.815756  916052 pod_ready.go:83] waiting for pod "kube-apiserver-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:39.821905  916052 pod_ready.go:94] pod "kube-apiserver-functional-460513" is "Ready"
	I1002 19:57:39.821919  916052 pod_ready.go:86] duration metric: took 1.006150794s for pod "kube-apiserver-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:39.824446  916052 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 19:57:41.829719  916052 pod_ready.go:104] pod "kube-controller-manager-functional-460513" is not "Ready", error: <nil>
	I1002 19:57:43.830373  916052 pod_ready.go:94] pod "kube-controller-manager-functional-460513" is "Ready"
	I1002 19:57:43.830387  916052 pod_ready.go:86] duration metric: took 4.005928031s for pod "kube-controller-manager-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:43.832749  916052 pod_ready.go:83] waiting for pod "kube-proxy-z7ghw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:43.836888  916052 pod_ready.go:94] pod "kube-proxy-z7ghw" is "Ready"
	I1002 19:57:43.836901  916052 pod_ready.go:86] duration metric: took 4.13947ms for pod "kube-proxy-z7ghw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:43.839025  916052 pod_ready.go:83] waiting for pod "kube-scheduler-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:43.902322  916052 pod_ready.go:94] pod "kube-scheduler-functional-460513" is "Ready"
	I1002 19:57:43.902337  916052 pod_ready.go:86] duration metric: took 63.300304ms for pod "kube-scheduler-functional-460513" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:57:43.902348  916052 pod_ready.go:40] duration metric: took 8.665336389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 19:57:43.959849  916052 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 19:57:43.962845  916052 out.go:179] * Done! kubectl is now configured to use "functional-460513" cluster and "default" namespace by default
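
Before printing "Done!", the log waits for every kube-system pod matching the listed component labels (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) to report a Ready condition. A rough client-go sketch of such a wait, assuming a kubeconfig at the default location (selectors and timeouts are illustrative, not minikube's pod_ready implementation):

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		allReady := true
		for _, sel := range selectors {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err != nil || len(pods.Items) == 0 {
				allReady = false
				break
			}
			for _, p := range pods.Items {
				if !podReady(p) {
					allReady = false
				}
			}
		}
		if allReady {
			fmt.Println("all labelled kube-system pods are Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for kube-system pods")
}
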
	
	
	==> Docker <==
	Oct 02 19:57:32 functional-460513 cri-dockerd[7470]: time="2025-10-02T19:57:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/afeaaf344747b2d0605368a61849e92006251202a4caf7e071f6d4c2cb1f9fd9/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Oct 02 19:57:33 functional-460513 cri-dockerd[7470]: time="2025-10-02T19:57:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ed81eec2d77f8a2ce04af59e640ea1dff73d806a50f6c2d4a3531fbc6319b521/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Oct 02 19:57:33 functional-460513 cri-dockerd[7470]: time="2025-10-02T19:57:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a14beb9805ac0f1db8daedb86fe5802059bc6fa86c0ffef5c854583e80b9bf9/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options trust-ad ndots:0 edns0]"
	Oct 02 19:57:47 functional-460513 cri-dockerd[7470]: time="2025-10-02T19:57:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5f81cd2cf859c41992ae23fa25c5a009e7884a163623d644d3582d657b98f1d9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 19:57:47 functional-460513 dockerd[6691]: time="2025-10-02T19:57:47.805554553Z" level=error msg="Not continuing with pull after error" error="errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
	Oct 02 19:57:47 functional-460513 dockerd[6691]: time="2025-10-02T19:57:47.805610308Z" level=info msg="Ignoring extra error returned from registry" error="unauthorized: authentication required"
	Oct 02 19:57:48 functional-460513 dockerd[6691]: time="2025-10-02T19:57:48.730544158Z" level=info msg="ignoring event" container=5f81cd2cf859c41992ae23fa25c5a009e7884a163623d644d3582d657b98f1d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 19:57:48 functional-460513 cri-dockerd[7470]: time="2025-10-02T19:57:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8d05fb5094fbcfc964aa754c62296da626a40af4a46d6e88906a35d5a5761e28/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 19:57:50 functional-460513 dockerd[6691]: time="2025-10-02T19:57:50.730115419Z" level=info msg="ignoring event" container=8d05fb5094fbcfc964aa754c62296da626a40af4a46d6e88906a35d5a5761e28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 19:57:54 functional-460513 cri-dockerd[7470]: time="2025-10-02T19:57:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9789396a1419a07d6bf41a27c95afd46a0d078abfcee2681973de39335d9c661/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 19:57:56 functional-460513 cri-dockerd[7470]: time="2025-10-02T19:57:56Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Oct 02 19:58:00 functional-460513 cri-dockerd[7470]: time="2025-10-02T19:58:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/772896884b95d8f7cbd6a565e9165f035c57694b1feee0c552941f2e258fcd97/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 19:58:00 functional-460513 dockerd[6691]: time="2025-10-02T19:58:00.835667159Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 19:58:03 functional-460513 cri-dockerd[7470]: time="2025-10-02T19:58:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fba126debd01c89d3f6e6c837d5a21ba49c98f387b8be3c5ce7b33bc3d8ab693/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 02 19:58:03 functional-460513 dockerd[6691]: time="2025-10-02T19:58:03.384527077Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 19:58:13 functional-460513 dockerd[6691]: time="2025-10-02T19:58:13.307124277Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 19:58:14 functional-460513 dockerd[6691]: time="2025-10-02T19:58:14.331789963Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 19:58:35 functional-460513 dockerd[6691]: time="2025-10-02T19:58:35.412865557Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 19:58:35 functional-460513 cri-dockerd[7470]: time="2025-10-02T19:58:35Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Oct 02 19:58:44 functional-460513 dockerd[6691]: time="2025-10-02T19:58:44.318848656Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 19:59:17 functional-460513 dockerd[6691]: time="2025-10-02T19:59:17.332076152Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 19:59:35 functional-460513 dockerd[6691]: time="2025-10-02T19:59:35.325995031Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:00:48 functional-460513 dockerd[6691]: time="2025-10-02T20:00:48.471468567Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 20:00:48 functional-460513 cri-dockerd[7470]: time="2025-10-02T20:00:48Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Oct 02 20:01:06 functional-460513 dockerd[6691]: time="2025-10-02T20:01:06.335849446Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
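
The repeated "toomanyrequests" errors above indicate Docker Hub's unauthenticated pull limit was hit during the test window, which is why several image pulls in this run never complete. A hedged sketch (not part of minikube or this test) of retrying such a pull with exponential backoff once that error string is seen:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pullWithBackoff shells out to the docker CLI and retries only when the output
// contains Docker Hub's rate-limit marker "toomanyrequests".
func pullWithBackoff(image string, attempts int) error {
	delay := 5 * time.Second
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("docker", "pull", image).CombinedOutput()
		if err == nil {
			return nil
		}
		if !strings.Contains(string(out), "toomanyrequests") {
			return fmt.Errorf("pull failed: %v: %s", err, out)
		}
		fmt.Printf("rate limited, retrying in %s\n", delay)
		time.Sleep(delay)
		delay *= 2 // exponential backoff between attempts
	}
	return fmt.Errorf("still rate limited after %d attempts", attempts)
}

func main() {
	if err := pullWithBackoff("nginx:latest", 5); err != nil {
		fmt.Println(err)
	}
}
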
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bc256734b9fe8       nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8   4 minutes ago       Running             nginx                     0                   9789396a1419a       nginx-svc                                   default
	fe224927b52c5       05baa95f5142d                                                                   4 minutes ago       Running             kube-proxy                2                   3a14beb9805ac       kube-proxy-z7ghw                            kube-system
	33f4fa437242e       ba04bb24b9575                                                                   4 minutes ago       Running             storage-provisioner       2                   ed81eec2d77f8       storage-provisioner                         kube-system
	02510aec7c38d       138784d87c9c5                                                                   4 minutes ago       Running             coredns                   2                   afeaaf344747b       coredns-66bc5c9577-bb2ds                    kube-system
	26134bc61f5d9       a1894772a478e                                                                   4 minutes ago       Running             etcd                      2                   4cf8116593459       etcd-functional-460513                      kube-system
	db9b1101b76eb       43911e833d64d                                                                   4 minutes ago       Running             kube-apiserver            0                   29b883084068a       kube-apiserver-functional-460513            kube-system
	0e09036d7add9       b5f57ec6b9867                                                                   4 minutes ago       Running             kube-scheduler            2                   bfb48aab841d2       kube-scheduler-functional-460513            kube-system
	5d710be832df0       7eb2c6ff0c5a7                                                                   4 minutes ago       Running             kube-controller-manager   2                   605575f71f812       kube-controller-manager-functional-460513   kube-system
	11843acc93b83       138784d87c9c5                                                                   5 minutes ago       Exited              coredns                   1                   c216e22a818d3       coredns-66bc5c9577-bb2ds                    kube-system
	5e610b6f5c956       ba04bb24b9575                                                                   5 minutes ago       Exited              storage-provisioner       1                   d6bece758620b       storage-provisioner                         kube-system
	8013cb97c756c       05baa95f5142d                                                                   5 minutes ago       Exited              kube-proxy                1                   6fa6f9b610fc1       kube-proxy-z7ghw                            kube-system
	5459180499bcd       b5f57ec6b9867                                                                   5 minutes ago       Exited              kube-scheduler            1                   14d19245da307       kube-scheduler-functional-460513            kube-system
	4805d040cabcf       a1894772a478e                                                                   5 minutes ago       Exited              etcd                      1                   b61a48faa350c       etcd-functional-460513                      kube-system
	cccbefc54d3cd       7eb2c6ff0c5a7                                                                   5 minutes ago       Exited              kube-controller-manager   1                   b7d3e4afda29d       kube-controller-manager-functional-460513   kube-system
	
	
	==> coredns [02510aec7c38] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34079 - 21240 "HINFO IN 244857414700627593.4635503374353347991. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021849786s
	
	
	==> coredns [11843acc93b8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43425 - 16476 "HINFO IN 6420058890467486523.4324477465014152588. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020251933s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-460513
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-460513
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=functional-460513
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T19_55_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 19:55:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-460513
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:01:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 19:58:32 +0000   Thu, 02 Oct 2025 19:54:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 19:58:32 +0000   Thu, 02 Oct 2025 19:54:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 19:58:32 +0000   Thu, 02 Oct 2025 19:54:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 19:58:32 +0000   Thu, 02 Oct 2025 19:55:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-460513
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 383087b4c8744483b09343609d84322f
	  System UUID:                5b6ef310-3cb5-4b1c-978f-45f181f323cd
	  Boot ID:                    0abe58db-3afd-40ad-9a63-2ed98334b343
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-7d85dfc575-85j8h          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 coredns-66bc5c9577-bb2ds                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m49s
	  kube-system                 etcd-functional-460513                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m54s
	  kube-system                 kube-apiserver-functional-460513             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-controller-manager-functional-460513    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 kube-proxy-z7ghw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 kube-scheduler-functional-460513             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m47s                  kube-proxy       
	  Normal   Starting                 4m27s                  kube-proxy       
	  Normal   Starting                 5m34s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  7m3s (x8 over 7m3s)    kubelet          Node functional-460513 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m3s (x8 over 7m3s)    kubelet          Node functional-460513 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m3s (x7 over 7m3s)    kubelet          Node functional-460513 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     6m54s                  kubelet          Node functional-460513 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 6m54s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  6m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m54s                  kubelet          Node functional-460513 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m54s                  kubelet          Node functional-460513 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                6m54s                  kubelet          Node functional-460513 status is now: NodeReady
	  Normal   Starting                 6m54s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           6m50s                  node-controller  Node functional-460513 event: Registered Node functional-460513 in Controller
	  Normal   RegisteredNode           5m32s                  node-controller  Node functional-460513 event: Registered Node functional-460513 in Controller
	  Warning  ContainerGCFailed        4m54s (x2 over 5m54s)  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    4m35s (x8 over 4m35s)  kubelet          Node functional-460513 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  4m35s (x8 over 4m35s)  kubelet          Node functional-460513 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 4m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     4m35s (x7 over 4m35s)  kubelet          Node functional-460513 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m26s                  node-controller  Node functional-460513 event: Registered Node functional-460513 in Controller
	
	
	==> dmesg <==
	[Oct 2 18:16] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 19:46] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [26134bc61f5d] <==
	{"level":"warn","ts":"2025-10-02T19:57:30.112346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.135910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.154947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.172696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.203564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.231449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.255543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.266504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.329264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.337464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.364171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.390608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.429996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.451770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.474863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.502871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.549653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.571696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.604458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.632445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.711817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.749386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.773166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.809665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:57:30.895187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44136","server-name":"","error":"EOF"}
	
	
	==> etcd [4805d040cabc] <==
	{"level":"warn","ts":"2025-10-02T19:56:24.649466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.669130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.692938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.722613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.742087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.758015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T19:56:24.852099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56828","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T19:57:09.748218Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T19:57:09.748293Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-460513","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T19:57:09.748479Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T19:57:16.751072Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T19:57:16.753295Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T19:57:16.753526Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T19:57:16.754984Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-02T19:57:16.755181Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T19:57:16.756496Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T19:57:16.756697Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T19:57:16.756785Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T19:57:16.756962Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T19:57:16.757051Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T19:57:16.757145Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T19:57:16.760135Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T19:57:16.760313Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T19:57:16.760384Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T19:57:16.760510Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-460513","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 20:02:01 up  4:44,  0 user,  load average: 0.40, 1.00, 1.79
	Linux functional-460513 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [db9b1101b76e] <==
	I1002 19:57:31.881895       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 19:57:31.910109       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 19:57:31.910420       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 19:57:31.920008       1 cache.go:39] Caches are synced for autoregister controller
	I1002 19:57:31.921093       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 19:57:31.921322       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 19:57:31.921533       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 19:57:31.921674       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 19:57:31.921966       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 19:57:31.927546       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 19:57:31.928922       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 19:57:31.930128       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 19:57:32.120722       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 19:57:32.628068       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1002 19:57:33.145257       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 19:57:33.146828       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 19:57:33.160804       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 19:57:33.783270       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 19:57:33.834297       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 19:57:33.873574       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 19:57:33.886358       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 19:57:35.224249       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 19:57:47.082177       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.248.125"}
	I1002 19:57:54.116376       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.89.238"}
	I1002 19:58:02.716762       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.203.206"}
	
	
	==> kube-controller-manager [5d710be832df] <==
	I1002 19:57:35.170845       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 19:57:35.170870       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 19:57:35.172214       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 19:57:35.172205       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 19:57:35.176943       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 19:57:35.178569       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 19:57:35.178753       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 19:57:35.178865       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 19:57:35.184345       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 19:57:35.186679       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 19:57:35.189823       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 19:57:35.193902       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 19:57:35.195049       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 19:57:35.199815       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 19:57:35.210359       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 19:57:35.210452       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 19:57:35.213251       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 19:57:35.213478       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 19:57:35.213609       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 19:57:35.216464       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 19:57:35.216526       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 19:57:35.216569       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 19:57:35.217285       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 19:57:35.219381       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 19:57:35.224560       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [cccbefc54d3c] <==
	I1002 19:56:29.302392       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 19:56:29.307085       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 19:56:29.310379       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 19:56:29.319631       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 19:56:29.322784       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 19:56:29.326064       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 19:56:29.328302       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 19:56:29.332298       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 19:56:29.332513       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 19:56:29.332362       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 19:56:29.332660       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 19:56:29.332336       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 19:56:29.333765       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 19:56:29.333841       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 19:56:29.337080       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 19:56:29.339430       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 19:56:29.339806       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 19:56:29.339958       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 19:56:29.340109       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 19:56:29.340280       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 19:56:29.340402       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 19:56:29.342935       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 19:56:29.345450       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 19:56:29.348097       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 19:56:29.368572       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [8013cb97c756] <==
	I1002 19:56:26.293475       1 server_linux.go:53] "Using iptables proxy"
	I1002 19:56:26.542177       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 19:56:26.642813       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 19:56:26.642867       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 19:56:26.642965       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 19:56:26.881417       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 19:56:26.881477       1 server_linux.go:132] "Using iptables Proxier"
	I1002 19:56:26.953908       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 19:56:26.969663       1 server.go:527] "Version info" version="v1.34.1"
	I1002 19:56:26.969688       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 19:56:26.978549       1 config.go:200] "Starting service config controller"
	I1002 19:56:26.978578       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 19:56:27.018283       1 config.go:106] "Starting endpoint slice config controller"
	I1002 19:56:27.018304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 19:56:27.018327       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 19:56:27.018332       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 19:56:27.018825       1 config.go:309] "Starting node config controller"
	I1002 19:56:27.018838       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 19:56:27.018845       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 19:56:27.079689       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 19:56:27.118988       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 19:56:27.119021       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [fe224927b52c] <==
	I1002 19:57:33.648761       1 server_linux.go:53] "Using iptables proxy"
	I1002 19:57:33.759095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 19:57:33.860901       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 19:57:33.862375       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 19:57:33.862589       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 19:57:33.950063       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 19:57:33.953446       1 server_linux.go:132] "Using iptables Proxier"
	I1002 19:57:33.978599       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 19:57:33.978920       1 server.go:527] "Version info" version="v1.34.1"
	I1002 19:57:33.978939       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 19:57:33.981527       1 config.go:200] "Starting service config controller"
	I1002 19:57:33.981550       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 19:57:33.983742       1 config.go:106] "Starting endpoint slice config controller"
	I1002 19:57:33.983757       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 19:57:33.983780       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 19:57:33.983784       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 19:57:33.984516       1 config.go:309] "Starting node config controller"
	I1002 19:57:33.984523       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 19:57:33.984530       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 19:57:34.082480       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 19:57:34.085275       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 19:57:34.085312       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0e09036d7add] <==
	I1002 19:57:31.457845       1 serving.go:386] Generated self-signed cert in-memory
	I1002 19:57:33.347221       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 19:57:33.347258       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 19:57:33.354010       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 19:57:33.354105       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 19:57:33.354127       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 19:57:33.356128       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 19:57:33.366203       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:57:33.366227       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:57:33.366246       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:57:33.366252       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:57:33.454928       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 19:57:33.467906       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:57:33.467992       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [5459180499bc] <==
	I1002 19:56:24.387641       1 serving.go:386] Generated self-signed cert in-memory
	I1002 19:56:26.440161       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 19:56:26.440199       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 19:56:26.454187       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 19:56:26.454290       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 19:56:26.455734       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 19:56:26.461260       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 19:56:26.461736       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:56:26.461761       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:56:26.461780       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:56:26.461791       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:56:26.562274       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:56:26.562343       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 19:56:26.562445       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:57:09.734861       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 19:57:09.734884       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 19:57:09.734919       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 19:57:09.734950       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 19:57:09.734971       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1002 19:57:09.735006       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 19:57:09.735269       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 19:57:09.735298       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 19:59:59 functional-460513 kubelet[7846]: E1002 19:59:59.099522    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:00:10 functional-460513 kubelet[7846]: E1002 20:00:10.100345    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:00:11 functional-460513 kubelet[7846]: E1002 20:00:11.099641    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:00:22 functional-460513 kubelet[7846]: E1002 20:00:22.106836    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:00:24 functional-460513 kubelet[7846]: E1002 20:00:24.099963    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:00:36 functional-460513 kubelet[7846]: E1002 20:00:36.100158    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:00:39 functional-460513 kubelet[7846]: E1002 20:00:39.099624    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:00:48 functional-460513 kubelet[7846]: E1002 20:00:48.475391    7846 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:00:48 functional-460513 kubelet[7846]: E1002 20:00:48.475456    7846 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:00:48 functional-460513 kubelet[7846]: E1002 20:00:48.475557    7846 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(27481bf8-0750-44a3-93cc-d73e2662010e): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:00:48 functional-460513 kubelet[7846]: E1002 20:00:48.475680    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:00:52 functional-460513 kubelet[7846]: E1002 20:00:52.106565    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:01:00 functional-460513 kubelet[7846]: E1002 20:01:00.099268    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:01:06 functional-460513 kubelet[7846]: E1002 20:01:06.339437    7846 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 20:01:06 functional-460513 kubelet[7846]: E1002 20:01:06.339508    7846 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 20:01:06 functional-460513 kubelet[7846]: E1002 20:01:06.339612    7846 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-85j8h_default(7456d5e9-502e-455b-9ac7-aed4d302fe22): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:01:06 functional-460513 kubelet[7846]: E1002 20:01:06.339650    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:01:13 functional-460513 kubelet[7846]: E1002 20:01:13.099449    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:01:20 functional-460513 kubelet[7846]: E1002 20:01:20.106967    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:01:26 functional-460513 kubelet[7846]: E1002 20:01:26.100655    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:01:32 functional-460513 kubelet[7846]: E1002 20:01:32.099964    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:01:40 functional-460513 kubelet[7846]: E1002 20:01:40.099664    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:01:45 functional-460513 kubelet[7846]: E1002 20:01:45.100063    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	Oct 02 20:01:53 functional-460513 kubelet[7846]: E1002 20:01:53.100230    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="27481bf8-0750-44a3-93cc-d73e2662010e"
	Oct 02 20:01:56 functional-460513 kubelet[7846]: E1002 20:01:56.100625    7846 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-85j8h" podUID="7456d5e9-502e-455b-9ac7-aed4d302fe22"
	
	
	==> storage-provisioner [33f4fa437242] <==
	W1002 20:01:36.056818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:38.060262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:38.067346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:40.076321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:40.082687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:42.087671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:42.100588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:44.104872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:44.110686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:46.121954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:46.128982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:48.132531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:48.140138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:50.143568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:50.148764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:52.152248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:52.159544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:54.163382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:54.168340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:56.172550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:56.177875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:58.181376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:01:58.186008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:02:00.202008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:02:00.250454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [5e610b6f5c95] <==
	W1002 19:56:45.555569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:45.562256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:47.582004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:47.589875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:49.592960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:49.598092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:51.600865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:51.608664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:53.612538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:53.618914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:55.622089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:55.627118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:57.630513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:57.635573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:59.638282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:56:59.643254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:01.646302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:01.653892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:03.657134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:03.662085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:05.665042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:05.669486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:07.672971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:57:07.679989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	E1002 19:57:09.680844       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-460513 -n functional-460513
helpers_test.go:269: (dbg) Run:  kubectl --context functional-460513 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-connect-7d85dfc575-85j8h sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-460513 describe pod hello-node-connect-7d85dfc575-85j8h sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-460513 describe pod hello-node-connect-7d85dfc575-85j8h sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-connect-7d85dfc575-85j8h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-460513/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 19:58:02 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ps69t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ps69t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-85j8h to functional-460513
	  Normal   Pulling    56s (x5 over 3m59s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     56s (x5 over 3m59s)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     56s (x5 over 3m59s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    6s (x15 over 3m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     6s (x15 over 3m59s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-460513/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 19:57:59 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8qj7g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-8qj7g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m3s                  default-scheduler  Successfully assigned default/sp-pod to functional-460513
	  Warning  Failed     2m45s (x3 over 4m2s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    74s (x5 over 4m2s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     74s (x5 over 4m2s)    kubelet            Error: ErrImagePull
	  Warning  Failed     74s (x2 over 3m27s)   kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    9s (x15 over 4m1s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     9s (x15 over 4m1s)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (248.65s)
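Note: both non-running pods above are stuck in ImagePullBackOff because Docker Hub's unauthenticated pull rate limit was hit (see the "toomanyrequests" events), so the PersistentVolumeClaim assertions never got to run. A minimal workaround sketch, assuming the CI host itself can pull the images (for example after docker login), is to pre-seed them into the cluster so the kubelet never pulls from the registry:

  docker pull docker.io/nginx
  docker pull kicbase/echo-server
  minikube -p functional-460513 image load docker.io/nginx
  minikube -p functional-460513 image load kicbase/echo-server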

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-460513 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-460513 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-s8zx4" [7076f721-7fec-48cb-b884-2ff8c9abbcd2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1002 20:05:32.271167  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-460513 -n functional-460513
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-02 20:12:03.409429288 +0000 UTC m=+1497.824203860
functional_test.go:1460: (dbg) Run:  kubectl --context functional-460513 describe po hello-node-75c85bcc94-s8zx4 -n default
functional_test.go:1460: (dbg) kubectl --context functional-460513 describe po hello-node-75c85bcc94-s8zx4 -n default:
Name:             hello-node-75c85bcc94-s8zx4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-460513/192.168.49.2
Start Time:       Thu, 02 Oct 2025 20:02:02 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cg2d6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-cg2d6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-s8zx4 to functional-460513
  Warning  Failed     8m34s (x3 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    7m8s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m8s (x5 over 10m)      kubelet            Error: ErrImagePull
  Warning  Failed     7m8s (x2 over 9m48s)    kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    4m45s (x22 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m45s (x22 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-460513 logs hello-node-75c85bcc94-s8zx4 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-460513 logs hello-node-75c85bcc94-s8zx4 -n default: exit status 1 (102.196705ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-s8zx4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-460513 logs hello-node-75c85bcc94-s8zx4 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.75s)
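Note: the hello-node deployment itself was created successfully; its pod never became Ready because of the same unauthenticated pull rate limit, so the 10m0s wait expired. An illustrative way to confirm this outside the test harness (sketch only, using the same kubectl context as above):

  kubectl --context functional-460513 rollout status deployment/hello-node --timeout=60s
  kubectl --context functional-460513 get events -n default --field-selector reason=Failed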

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-460513 service --namespace=default --https --url hello-node: exit status 115 (397.679317ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31764
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-460513 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-460513 service hello-node --url --format={{.IP}}: exit status 115 (418.740161ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-460513 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-460513 service hello-node --url: exit status 115 (406.791307ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31764
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-460513 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31764
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.41s)
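Note: the ServiceCmd/HTTPS, ServiceCmd/Format and ServiceCmd/URL failures are follow-on failures from ServiceCmd/DeployApp: the hello-node Service exists and a NodePort (31764) was allocated, but minikube service exits with SVC_UNREACHABLE because no running pod backs the service. A sketch of how one might confirm that the service has no ready endpoints (assuming the same context):

  kubectl --context functional-460513 get svc hello-node -o wide
  kubectl --context functional-460513 get endpoints hello-node

With no Ready pods behind the selector, the endpoints object lists no addresses, which matches the SVC_UNREACHABLE message above.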

                                                
                                    

Test pass (314/347)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 18.43
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 21.98
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
22 TestOffline 87.46
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 163.87
29 TestAddons/serial/Volcano 42.86
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.91
35 TestAddons/parallel/Registry 16.53
36 TestAddons/parallel/RegistryCreds 0.69
37 TestAddons/parallel/Ingress 20.26
38 TestAddons/parallel/InspektorGadget 5.24
39 TestAddons/parallel/MetricsServer 6.84
41 TestAddons/parallel/CSI 62.18
42 TestAddons/parallel/Headlamp 17.72
43 TestAddons/parallel/CloudSpanner 6.59
44 TestAddons/parallel/LocalPath 53.89
45 TestAddons/parallel/NvidiaDevicePlugin 5.71
46 TestAddons/parallel/Yakd 11.88
48 TestAddons/StoppedEnableDisable 11.35
49 TestCertOptions 36.68
50 TestCertExpiration 273.96
51 TestDockerFlags 43.56
52 TestForceSystemdFlag 40.74
53 TestForceSystemdEnv 48.37
59 TestErrorSpam/setup 38.17
60 TestErrorSpam/start 0.83
61 TestErrorSpam/status 1.08
62 TestErrorSpam/pause 1.55
63 TestErrorSpam/unpause 1.72
64 TestErrorSpam/stop 11.16
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 79.86
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 49.24
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.95
76 TestFunctional/serial/CacheCmd/cache/add_local 1.08
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.15
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 59.15
85 TestFunctional/serial/ComponentHealth 0.11
86 TestFunctional/serial/LogsCmd 1.45
87 TestFunctional/serial/LogsFileCmd 1.3
88 TestFunctional/serial/InvalidService 4.86
90 TestFunctional/parallel/ConfigCmd 0.51
92 TestFunctional/parallel/DryRun 0.43
93 TestFunctional/parallel/InternationalLanguage 0.24
94 TestFunctional/parallel/StatusCmd 1.09
99 TestFunctional/parallel/AddonsCmd 0.15
102 TestFunctional/parallel/SSHCmd 0.73
103 TestFunctional/parallel/CpCmd 2.51
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 1.73
110 TestFunctional/parallel/NodeLabels 0.1
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
114 TestFunctional/parallel/License 0.31
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.5
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
128 TestFunctional/parallel/ProfileCmd/profile_list 0.42
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
130 TestFunctional/parallel/MountCmd/any-port 7.99
131 TestFunctional/parallel/MountCmd/specific-port 1.81
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.91
133 TestFunctional/parallel/ServiceCmd/List 1.32
134 TestFunctional/parallel/ServiceCmd/JSONOutput 1.32
138 TestFunctional/parallel/Version/short 0.06
139 TestFunctional/parallel/Version/components 1.04
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.62
145 TestFunctional/parallel/ImageCommands/Setup 0.64
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.99
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.82
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.05
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
156 TestFunctional/parallel/DockerEnv/bash 1.07
157 TestFunctional/delete_echo-server_images 0.04
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
164 TestMultiControlPlane/serial/StartCluster 171.07
165 TestMultiControlPlane/serial/DeployApp 7.48
166 TestMultiControlPlane/serial/PingHostFromPods 1.78
167 TestMultiControlPlane/serial/AddWorkerNode 36.54
168 TestMultiControlPlane/serial/NodeLabels 0.11
169 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.22
170 TestMultiControlPlane/serial/CopyFile 21.02
171 TestMultiControlPlane/serial/StopSecondaryNode 11.92
172 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
173 TestMultiControlPlane/serial/RestartSecondaryNode 65.37
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.17
175 TestMultiControlPlane/serial/RestartClusterKeepsNodes 196.84
176 TestMultiControlPlane/serial/DeleteSecondaryNode 10.99
177 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.82
178 TestMultiControlPlane/serial/StopCluster 33.09
179 TestMultiControlPlane/serial/RestartCluster 111.77
180 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
181 TestMultiControlPlane/serial/AddSecondaryNode 55.83
182 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.16
185 TestImageBuild/serial/Setup 33.61
186 TestImageBuild/serial/NormalBuild 1.94
187 TestImageBuild/serial/BuildWithBuildArg 0.95
188 TestImageBuild/serial/BuildWithDockerIgnore 0.87
189 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.93
193 TestJSONOutput/start/Command 74.37
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.68
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.6
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 5.85
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.24
218 TestKicCustomNetwork/create_custom_network 38.32
219 TestKicCustomNetwork/use_default_bridge_network 37.56
220 TestKicExistingNetwork 34.13
221 TestKicCustomSubnet 36.37
222 TestKicStaticIP 37.56
223 TestMainNoArgs 0.05
224 TestMinikubeProfile 74.68
227 TestMountStart/serial/StartWithMountFirst 11.01
228 TestMountStart/serial/VerifyMountFirst 0.27
229 TestMountStart/serial/StartWithMountSecond 8.96
230 TestMountStart/serial/VerifyMountSecond 0.26
231 TestMountStart/serial/DeleteFirst 1.48
232 TestMountStart/serial/VerifyMountPostDelete 0.26
233 TestMountStart/serial/Stop 1.25
234 TestMountStart/serial/RestartStopped 8.7
235 TestMountStart/serial/VerifyMountPostStop 0.27
238 TestMultiNode/serial/FreshStart2Nodes 90.61
239 TestMultiNode/serial/DeployApp2Nodes 5.38
240 TestMultiNode/serial/PingHostFrom2Pods 1.06
241 TestMultiNode/serial/AddNode 35.47
242 TestMultiNode/serial/MultiNodeLabels 0.08
243 TestMultiNode/serial/ProfileList 0.7
244 TestMultiNode/serial/CopyFile 10.89
245 TestMultiNode/serial/StopNode 2.35
246 TestMultiNode/serial/StartAfterStop 9.7
247 TestMultiNode/serial/RestartKeepsNodes 78.53
248 TestMultiNode/serial/DeleteNode 5.74
249 TestMultiNode/serial/StopMultiNode 21.9
250 TestMultiNode/serial/RestartMultiNode 52.22
251 TestMultiNode/serial/ValidateNameConflict 39.55
256 TestPreload 181.01
258 TestScheduledStopUnix 105.18
259 TestSkaffold 146.37
261 TestInsufficientStorage 14.46
262 TestRunningBinaryUpgrade 71.27
264 TestKubernetesUpgrade 390.81
265 TestMissingContainerUpgrade 144.85
267 TestPause/serial/Start 88.44
268 TestPause/serial/SecondStartNoReconfiguration 59.76
269 TestPause/serial/Pause 0.93
270 TestPause/serial/VerifyStatus 0.45
271 TestPause/serial/Unpause 0.79
272 TestPause/serial/PauseAgain 1.13
273 TestPause/serial/DeletePaused 2.77
274 TestPause/serial/VerifyDeletedResources 5.36
275 TestStoppedBinaryUpgrade/Setup 7.55
276 TestStoppedBinaryUpgrade/Upgrade 69.93
277 TestStoppedBinaryUpgrade/MinikubeLogs 1.11
286 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
287 TestNoKubernetes/serial/StartWithK8s 40.3
288 TestNoKubernetes/serial/StartWithStopK8s 18.41
289 TestNoKubernetes/serial/Start 11.58
290 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
291 TestNoKubernetes/serial/ProfileList 1.5
292 TestNoKubernetes/serial/Stop 1.31
293 TestNoKubernetes/serial/StartNoArgs 7.95
294 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
307 TestStartStop/group/old-k8s-version/serial/FirstStart 49.39
308 TestStartStop/group/old-k8s-version/serial/DeployApp 9.44
309 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
310 TestStartStop/group/old-k8s-version/serial/Stop 10.98
311 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
312 TestStartStop/group/old-k8s-version/serial/SecondStart 29.11
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.45
316 TestStartStop/group/old-k8s-version/serial/Pause 3.22
318 TestStartStop/group/no-preload/serial/FirstStart 83.04
320 TestStartStop/group/embed-certs/serial/FirstStart 81.41
321 TestStartStop/group/no-preload/serial/DeployApp 10.49
322 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.46
323 TestStartStop/group/no-preload/serial/Stop 11.3
324 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
325 TestStartStop/group/no-preload/serial/SecondStart 53.36
326 TestStartStop/group/embed-certs/serial/DeployApp 8.4
327 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
328 TestStartStop/group/embed-certs/serial/Stop 10.96
329 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
331 TestStartStop/group/embed-certs/serial/SecondStart 57.98
332 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
333 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
334 TestStartStop/group/no-preload/serial/Pause 5.27
336 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 75.19
337 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
338 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
339 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
340 TestStartStop/group/embed-certs/serial/Pause 3.32
342 TestStartStop/group/newest-cni/serial/FirstStart 40.72
343 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.53
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.5
345 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.28
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.72
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 31.01
350 TestStartStop/group/newest-cni/serial/Stop 11.35
351 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.38
352 TestStartStop/group/newest-cni/serial/SecondStart 23.62
353 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
357 TestStartStop/group/newest-cni/serial/Pause 3.38
358 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
359 TestNetworkPlugins/group/auto/Start 60.58
360 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
361 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.58
362 TestNetworkPlugins/group/kindnet/Start 65.22
363 TestNetworkPlugins/group/auto/KubeletFlags 0.33
364 TestNetworkPlugins/group/auto/NetCatPod 10.32
365 TestNetworkPlugins/group/auto/DNS 0.33
366 TestNetworkPlugins/group/auto/Localhost 0.18
367 TestNetworkPlugins/group/auto/HairPin 0.27
368 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
369 TestNetworkPlugins/group/kindnet/KubeletFlags 0.48
370 TestNetworkPlugins/group/kindnet/NetCatPod 10.37
371 TestNetworkPlugins/group/kindnet/DNS 0.36
372 TestNetworkPlugins/group/kindnet/Localhost 0.24
373 TestNetworkPlugins/group/kindnet/HairPin 0.24
374 TestNetworkPlugins/group/calico/Start 77.46
375 TestNetworkPlugins/group/custom-flannel/Start 65.67
376 TestNetworkPlugins/group/calico/ControllerPod 6.01
377 TestNetworkPlugins/group/calico/KubeletFlags 0.38
378 TestNetworkPlugins/group/calico/NetCatPod 11.29
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.3
381 TestNetworkPlugins/group/calico/DNS 0.21
382 TestNetworkPlugins/group/calico/Localhost 0.18
383 TestNetworkPlugins/group/calico/HairPin 0.17
384 TestNetworkPlugins/group/custom-flannel/DNS 0.21
385 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
386 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
387 TestNetworkPlugins/group/false/Start 81.31
388 TestNetworkPlugins/group/enable-default-cni/Start 89.07
389 TestNetworkPlugins/group/false/KubeletFlags 0.3
390 TestNetworkPlugins/group/false/NetCatPod 10.28
391 TestNetworkPlugins/group/false/DNS 0.19
392 TestNetworkPlugins/group/false/Localhost 0.16
393 TestNetworkPlugins/group/false/HairPin 0.17
394 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
395 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.27
396 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
397 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
398 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
399 TestNetworkPlugins/group/flannel/Start 57.74
400 TestNetworkPlugins/group/bridge/Start 80.11
401 TestNetworkPlugins/group/flannel/ControllerPod 6.01
402 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
403 TestNetworkPlugins/group/flannel/NetCatPod 10.45
404 TestNetworkPlugins/group/flannel/DNS 0.25
405 TestNetworkPlugins/group/flannel/Localhost 0.17
406 TestNetworkPlugins/group/flannel/HairPin 0.27
407 TestNetworkPlugins/group/kubenet/Start 76.67
408 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
409 TestNetworkPlugins/group/bridge/NetCatPod 11.34
410 TestNetworkPlugins/group/bridge/DNS 0.25
411 TestNetworkPlugins/group/bridge/Localhost 0.23
412 TestNetworkPlugins/group/bridge/HairPin 0.2
413 TestNetworkPlugins/group/kubenet/KubeletFlags 0.32
414 TestNetworkPlugins/group/kubenet/NetCatPod 10.3
415 TestNetworkPlugins/group/kubenet/DNS 0.19
416 TestNetworkPlugins/group/kubenet/Localhost 0.16
417 TestNetworkPlugins/group/kubenet/HairPin 0.15
TestDownloadOnly/v1.28.0/json-events (18.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-304074 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-304074 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (18.433150972s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (18.43s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 19:47:24.059871  882884 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1002 19:47:24.059960  882884 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-881023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
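Note: preload-exists only verifies that the tarball downloaded by the json-events subtest is present in the local cache. The check can be reproduced by hand (sketch, using the path reported in the log above):

  ls -lh /home/jenkins/minikube-integration/21683-881023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4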

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-304074
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-304074: exit status 85 (93.597017ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-304074 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-304074 │ jenkins │ v1.37.0 │ 02 Oct 25 19:47 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 19:47:05
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 19:47:05.674826  882889 out.go:360] Setting OutFile to fd 1 ...
	I1002 19:47:05.674973  882889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:47:05.674985  882889 out.go:374] Setting ErrFile to fd 2...
	I1002 19:47:05.674991  882889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:47:05.675258  882889 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
	W1002 19:47:05.675405  882889 root.go:315] Error reading config file at /home/jenkins/minikube-integration/21683-881023/.minikube/config/config.json: open /home/jenkins/minikube-integration/21683-881023/.minikube/config/config.json: no such file or directory
	I1002 19:47:05.675819  882889 out.go:368] Setting JSON to true
	I1002 19:47:05.676677  882889 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16164,"bootTime":1759418262,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 19:47:05.676746  882889 start.go:140] virtualization:  
	I1002 19:47:05.681366  882889 out.go:99] [download-only-304074] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1002 19:47:05.681617  882889 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21683-881023/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 19:47:05.681743  882889 notify.go:221] Checking for updates...
	I1002 19:47:05.685659  882889 out.go:171] MINIKUBE_LOCATION=21683
	I1002 19:47:05.688739  882889 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:47:05.691780  882889 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig
	I1002 19:47:05.694686  882889 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube
	I1002 19:47:05.697679  882889 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 19:47:05.703247  882889 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 19:47:05.703528  882889 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 19:47:05.726728  882889 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 19:47:05.726866  882889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 19:47:05.787757  882889 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-02 19:47:05.778675199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 19:47:05.787871  882889 docker.go:319] overlay module found
	I1002 19:47:05.790797  882889 out.go:99] Using the docker driver based on user configuration
	I1002 19:47:05.790853  882889 start.go:306] selected driver: docker
	I1002 19:47:05.790870  882889 start.go:936] validating driver "docker" against <nil>
	I1002 19:47:05.790978  882889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 19:47:05.851174  882889 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-02 19:47:05.84239383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 19:47:05.851329  882889 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 19:47:05.851599  882889 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 19:47:05.851755  882889 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 19:47:05.854938  882889 out.go:171] Using Docker driver with root privileges
	I1002 19:47:05.857930  882889 cni.go:84] Creating CNI manager for ""
	I1002 19:47:05.858020  882889 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 19:47:05.858036  882889 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 19:47:05.858127  882889 start.go:350] cluster config:
	{Name:download-only-304074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-304074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:47:05.861289  882889 out.go:99] Starting "download-only-304074" primary control-plane node in "download-only-304074" cluster
	I1002 19:47:05.861308  882889 cache.go:124] Beginning downloading kic base image for docker with docker
	I1002 19:47:05.864315  882889 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 19:47:05.864355  882889 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1002 19:47:05.864512  882889 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 19:47:05.880415  882889 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 19:47:05.880580  882889 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 19:47:05.880669  882889 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 19:47:05.921876  882889 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1002 19:47:05.921912  882889 cache.go:59] Caching tarball of preloaded images
	I1002 19:47:05.922080  882889 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1002 19:47:05.925371  882889 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1002 19:47:05.925396  882889 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 from gcs api...
	I1002 19:47:06.015617  882889 preload.go:290] Got checksum from GCS API "002a73d62a3b066a08573cf3da2c8cb4"
	I1002 19:47:06.015751  882889 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /home/jenkins/minikube-integration/21683-881023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1002 19:47:15.078371  882889 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	
	
	* The control-plane node download-only-304074 host does not exist
	  To start a cluster, run: "minikube start -p download-only-304074"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-304074
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (21.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-902229 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-902229 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (21.983087913s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (21.98s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 19:47:46.491326  882884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1002 19:47:46.491364  882884 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-881023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-902229
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-902229: exit status 85 (84.697676ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-304074 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-304074 │ jenkins │ v1.37.0 │ 02 Oct 25 19:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 19:47 UTC │ 02 Oct 25 19:47 UTC │
	│ delete  │ -p download-only-304074                                                                                                                                                       │ download-only-304074 │ jenkins │ v1.37.0 │ 02 Oct 25 19:47 UTC │ 02 Oct 25 19:47 UTC │
	│ start   │ -o=json --download-only -p download-only-902229 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-902229 │ jenkins │ v1.37.0 │ 02 Oct 25 19:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 19:47:24
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 19:47:24.549430  883086 out.go:360] Setting OutFile to fd 1 ...
	I1002 19:47:24.549548  883086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:47:24.549560  883086 out.go:374] Setting ErrFile to fd 2...
	I1002 19:47:24.549566  883086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:47:24.549834  883086 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
	I1002 19:47:24.550244  883086 out.go:368] Setting JSON to true
	I1002 19:47:24.551046  883086 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16183,"bootTime":1759418262,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 19:47:24.551122  883086 start.go:140] virtualization:  
	I1002 19:47:24.554673  883086 out.go:99] [download-only-902229] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 19:47:24.554990  883086 notify.go:221] Checking for updates...
	I1002 19:47:24.558896  883086 out.go:171] MINIKUBE_LOCATION=21683
	I1002 19:47:24.561934  883086 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:47:24.564927  883086 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig
	I1002 19:47:24.568008  883086 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube
	I1002 19:47:24.571016  883086 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 19:47:24.576614  883086 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 19:47:24.576877  883086 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 19:47:24.603407  883086 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 19:47:24.603529  883086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 19:47:24.662648  883086 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 19:47:24.652763829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 19:47:24.662754  883086 docker.go:319] overlay module found
	I1002 19:47:24.665800  883086 out.go:99] Using the docker driver based on user configuration
	I1002 19:47:24.665860  883086 start.go:306] selected driver: docker
	I1002 19:47:24.665876  883086 start.go:936] validating driver "docker" against <nil>
	I1002 19:47:24.665974  883086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 19:47:24.721820  883086 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 19:47:24.712394996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 19:47:24.722007  883086 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 19:47:24.722314  883086 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 19:47:24.722480  883086 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 19:47:24.725529  883086 out.go:171] Using Docker driver with root privileges
	I1002 19:47:24.728485  883086 cni.go:84] Creating CNI manager for ""
	I1002 19:47:24.728574  883086 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 19:47:24.728616  883086 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 19:47:24.728710  883086 start.go:350] cluster config:
	{Name:download-only-902229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-902229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:47:24.731580  883086 out.go:99] Starting "download-only-902229" primary control-plane node in "download-only-902229" cluster
	I1002 19:47:24.731602  883086 cache.go:124] Beginning downloading kic base image for docker with docker
	I1002 19:47:24.734422  883086 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 19:47:24.734468  883086 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 19:47:24.734560  883086 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 19:47:24.750349  883086 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 19:47:24.750480  883086 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 19:47:24.750503  883086 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 19:47:24.750511  883086 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 19:47:24.750519  883086 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 19:47:24.802776  883086 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1002 19:47:24.802801  883086 cache.go:59] Caching tarball of preloaded images
	I1002 19:47:24.802961  883086 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1002 19:47:24.806061  883086 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1002 19:47:24.806089  883086 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 from gcs api...
	I1002 19:47:24.890876  883086 preload.go:290] Got checksum from GCS API "0ed426d75a878e5f4b25fef8ce404e82"
	I1002 19:47:24.890931  883086 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4?checksum=md5:0ed426d75a878e5f4b25fef8ce404e82 -> /home/jenkins/minikube-integration/21683-881023/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-902229 host does not exist
	  To start a cluster, run: "minikube start -p download-only-902229"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)
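Exit status 85 here is expected: the download-only profile never creates a host, so "minikube logs" has nothing to read and the subtest still passes. A small sketch of capturing that exit code from Go, using the same binary path and profile name as the log (treat both as placeholders for your own run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as aaa_download_only_test.go:183 above.
        cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-902229")
        out, err := cmd.CombinedOutput()
        if cmd.ProcessState != nil {
            // Observed as 85 in this run, because the host does not exist.
            fmt.Println("exit code:", cmd.ProcessState.ExitCode())
        }
        if err != nil {
            fmt.Printf("command failed: %v\n%s\n", err, out)
        }
    }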

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-902229
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I1002 19:47:47.672684  882884 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-267196 --alsologtostderr --binary-mirror http://127.0.0.1:44603 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-267196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-267196
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestOffline (87.46s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-388752 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-388752 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m25.27796147s)
helpers_test.go:175: Cleaning up "offline-docker-388752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-388752
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-388752: (2.186464603s)
--- PASS: TestOffline (87.46s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-660088
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-660088: exit status 85 (69.656569ms)

                                                
                                                
-- stdout --
	* Profile "addons-660088" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-660088"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-660088
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-660088: exit status 85 (74.061034ms)

                                                
                                                
-- stdout --
	* Profile "addons-660088" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-660088"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (163.87s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-660088 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-660088 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m43.868293883s)
--- PASS: TestAddons/Setup (163.87s)

                                                
                                    
TestAddons/serial/Volcano (42.86s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 70.024449ms
addons_test.go:884: volcano-controller stabilized in 71.288759ms
addons_test.go:876: volcano-admission stabilized in 71.431678ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-vb796" [ddbfcedd-1779-48df-96fc-2f611534f320] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003626285s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-rznhr" [2379403b-2fff-41f4-b729-f12806a0cd75] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003447136s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-gk8h9" [91e1c934-566c-4e7f-aa5d-6b11d590caa9] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00319357s
addons_test.go:903: (dbg) Run:  kubectl --context addons-660088 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-660088 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-660088 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [afda946e-4844-44ad-a0d9-f7fee28e6b7e] Pending
helpers_test.go:352: "test-job-nginx-0" [afda946e-4844-44ad-a0d9-f7fee28e6b7e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [afda946e-4844-44ad-a0d9-f7fee28e6b7e] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003597135s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-660088 addons disable volcano --alsologtostderr -v=1: (12.124306499s)
--- PASS: TestAddons/serial/Volcano (42.86s)
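The Volcano checks above are readiness polls against label selectors in the volcano-system namespace. A rough standalone equivalent that shells out to "kubectl wait" instead of the harness's own poll loop (an approximation, not the test's code; context, namespace and selectors are the ones shown in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // waitReady blocks until pods matching selector report Ready, or timeout.
    func waitReady(context, namespace, selector, timeout string) error {
        cmd := exec.Command("kubectl", "--context", context, "wait",
            "--for=condition=Ready", "pod", "-l", selector,
            "-n", namespace, "--timeout", timeout)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        selectors := []string{"app=volcano-scheduler", "app=volcano-admission", "app=volcano-controller"}
        for _, sel := range selectors {
            if err := waitReady("addons-660088", "volcano-system", sel, "6m"); err != nil {
                fmt.Println("not ready:", sel, err)
                os.Exit(1)
            }
        }
    }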

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-660088 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-660088 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.91s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-660088 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-660088 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6b28573e-83c4-487c-b1b8-67f2904e8d6e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6b28573e-83c4-487c-b1b8-67f2904e8d6e] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004170559s
addons_test.go:694: (dbg) Run:  kubectl --context addons-660088 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-660088 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-660088 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-660088 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.91s)
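The gcp-auth probes above check that fake credentials were injected into the busybox pod. The same three kubectl exec probes can be repeated outside the harness, for example with this minimal sketch (profile and pod names taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same probes the test runs against the busybox pod created from
        // testdata/busybox.yaml.
        probes := []string{
            "printenv GOOGLE_APPLICATION_CREDENTIALS",
            "cat /google-app-creds.json",
            "printenv GOOGLE_CLOUD_PROJECT",
        }
        for _, p := range probes {
            out, err := exec.Command("kubectl", "--context", "addons-660088",
                "exec", "busybox", "--", "/bin/sh", "-c", p).CombinedOutput()
            fmt.Printf("$ %s\n%s(err=%v)\n", p, out, err)
        }
    }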

                                                
                                    
TestAddons/parallel/Registry (16.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.166763ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-hz7d2" [e4884f07-d885-43ef-85f7-8419a6fc1ef4] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.009071154s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-jnbxh" [51430454-bf94-4b4e-bbfe-ce7ce6dcb319] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00527603s
addons_test.go:392: (dbg) Run:  kubectl --context addons-660088 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-660088 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-660088 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.505831642s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 ip
2025/10/02 19:51:50 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.53s)
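The registry check above verifies in-cluster reachability of the registry Service by running wget from a throwaway busybox pod. A sketch of the same probe from Go; the interactive -t flag from the original command is dropped because a plain Go program has no TTY:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Mirrors the kubectl run command in the log, with -i only (no TTY).
        cmd := exec.Command("kubectl", "--context", "addons-660088",
            "run", "-i", "--rm", "registry-test", "--restart=Never",
            "--image=gcr.io/k8s-minikube/busybox", "--",
            "sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s\n", out)
        if err != nil {
            fmt.Println("registry not reachable from inside the cluster:", err)
            os.Exit(1)
        }
    }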

                                                
                                    
TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.772932ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-660088
addons_test.go:332: (dbg) Run:  kubectl --context addons-660088 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                    
TestAddons/parallel/Ingress (20.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-660088 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-660088 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-660088 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [264b6892-664f-451c-8b74-72bb103ef685] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [264b6892-664f-451c-8b74-72bb103ef685] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003292368s
I1002 19:53:06.387477  882884 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-660088 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-660088 addons disable ingress-dns --alsologtostderr -v=1: (1.642871387s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-660088 addons disable ingress --alsologtostderr -v=1: (7.943897837s)
--- PASS: TestAddons/parallel/Ingress (20.26s)
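The ingress check above curls 127.0.0.1 from inside the node via "minikube ssh" with a Host header. A minimal sketch of the same request made directly against the node IP reported by "minikube ip" in this run (192.168.49.2); whether that IP is reachable from your host depends on the driver and network setup, so treat this as an assumption:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
        if err != nil {
            panic(err)
        }
        // Selects the Ingress rule created from testdata/nginx-ingress-v1.yaml.
        req.Host = "nginx.example.com"
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, "-", len(body), "bytes")
    }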

                                                
                                    
TestAddons/parallel/InspektorGadget (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-tnzdp" [e809fd88-228c-4e2c-a008-03ab566f22c9] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004230373s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.24s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.84s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 19.197072ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-vzhpr" [bb9c16ef-183e-4883-addb-ec88f5670fe8] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00346212s
addons_test.go:463: (dbg) Run:  kubectl --context addons-660088 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.84s)

                                                
                                    
TestAddons/parallel/CSI (62.18s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1002 19:52:16.191044  882884 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1002 19:52:16.195334  882884 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1002 19:52:16.195369  882884 kapi.go:107] duration metric: took 7.400399ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.412223ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-660088 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-660088 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c8d71b53-c177-4cdc-9394-0d5578ff604f] Pending
helpers_test.go:352: "task-pv-pod" [c8d71b53-c177-4cdc-9394-0d5578ff604f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c8d71b53-c177-4cdc-9394-0d5578ff604f] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003704128s
addons_test.go:572: (dbg) Run:  kubectl --context addons-660088 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-660088 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-660088 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-660088 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-660088 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-660088 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-660088 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [ed942cd0-f185-47ba-88ca-a02774092fdc] Pending
helpers_test.go:352: "task-pv-pod-restore" [ed942cd0-f185-47ba-88ca-a02774092fdc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [ed942cd0-f185-47ba-88ca-a02774092fdc] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004811764s
addons_test.go:614: (dbg) Run:  kubectl --context addons-660088 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-660088 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-660088 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-660088 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.892875389s)
--- PASS: TestAddons/parallel/CSI (62.18s)
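The long runs of helpers_test.go:402 lines above are poll loops waiting for the claims to bind. A rough standalone equivalent of one such loop, using the same kubectl jsonpath query; the Bound terminal phase is standard Kubernetes behaviour, and the poll interval here is arbitrary:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // pvcPhase runs the same query the helper uses in the log above.
    func pvcPhase(context, name, namespace string) (string, error) {
        out, err := exec.Command("kubectl", "--context", context, "get", "pvc", name,
            "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the test
        for time.Now().Before(deadline) {
            phase, err := pvcPhase("addons-660088", "hpvc", "default")
            fmt.Println("phase:", phase, "err:", err)
            if phase == "Bound" {
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for PVC to bind")
    }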

                                                
                                    
TestAddons/parallel/Headlamp (17.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-660088 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-lrqcs" [1ff1ed94-b064-4649-aa22-df1a0e172427] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-lrqcs" [1ff1ed94-b064-4649-aa22-df1a0e172427] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-lrqcs" [1ff1ed94-b064-4649-aa22-df1a0e172427] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-lrqcs" [1ff1ed94-b064-4649-aa22-df1a0e172427] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.0042826s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-660088 addons disable headlamp --alsologtostderr -v=1: (5.756761587s)
--- PASS: TestAddons/parallel/Headlamp (17.72s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-smcnz" [26db6d57-df55-45db-9559-fe0342205ceb] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003841631s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                    
TestAddons/parallel/LocalPath (53.89s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-660088 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-660088 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-660088 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [1415ffaf-3999-47bb-a4fa-64a21547e0ed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [1415ffaf-3999-47bb-a4fa-64a21547e0ed] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [1415ffaf-3999-47bb-a4fa-64a21547e0ed] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.02264895s
addons_test.go:967: (dbg) Run:  kubectl --context addons-660088 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 ssh "cat /opt/local-path-provisioner/pvc-83b018dc-6e55-49ab-bac9-f5ee8c0c088c_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-660088 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-660088 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-660088 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.162593267s)
--- PASS: TestAddons/parallel/LocalPath (53.89s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.71s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-j9mhb" [694b409a-fa8b-42d7-8f1e-8542c41a1656] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.010092566s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.71s)

                                                
                                    
TestAddons/parallel/Yakd (11.88s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-z6mhj" [b7c0f21b-b7a1-44b6-b2b6-2aaee693f084] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003656242s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-660088 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-660088 addons disable yakd --alsologtostderr -v=1: (5.879202452s)
--- PASS: TestAddons/parallel/Yakd (11.88s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.35s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-660088
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-660088: (11.073650002s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-660088
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-660088
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-660088
--- PASS: TestAddons/StoppedEnableDisable (11.35s)

                                                
                                    
TestCertOptions (36.68s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-403800 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-403800 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (33.535018125s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-403800 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-403800 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-403800 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-403800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-403800
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-403800: (2.265237591s)
--- PASS: TestCertOptions (36.68s)
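The openssl step above dumps the apiserver certificate so the extra --apiserver-ips and --apiserver-names can be confirmed in the SANs. A rough way to repeat that check outside the harness; plain substring matching on the decoded certificate is a simplification, and the expected values come from the start flags shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same command the test runs inside the node via minikube ssh.
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "cert-options-403800", "ssh",
            "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("ssh failed: %v\n%s\n", err, out)
            return
        }
        for _, want := range []string{"192.168.15.15", "www.google.com", "localhost"} {
            fmt.Printf("%-20s present=%v\n", want, strings.Contains(string(out), want))
        }
    }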

                                                
                                    
TestCertExpiration (273.96s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-569250 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-569250 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (47.17365778s)
E1002 20:56:05.340445  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-569250 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-569250 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (44.372651308s)
helpers_test.go:175: Cleaning up "cert-expiration-569250" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-569250
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-569250: (2.408739513s)
--- PASS: TestCertExpiration (273.96s)

                                                
                                    
TestDockerFlags (43.56s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-445578 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1002 20:55:32.271516  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:55:37.629792  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-445578 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.590462439s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-445578 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-445578 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-445578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-445578
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-445578: (2.280575991s)
--- PASS: TestDockerFlags (43.56s)

                                                
                                    
TestForceSystemdFlag (40.74s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-335761 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1002 20:52:53.629942  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-335761 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (38.217962792s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-335761 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-335761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-335761
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-335761: (2.106608798s)
--- PASS: TestForceSystemdFlag (40.74s)

                                                
                                    
TestForceSystemdEnv (48.37s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-905886 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-905886 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (45.670427289s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-905886 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-905886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-905886
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-905886: (2.314642252s)
--- PASS: TestForceSystemdEnv (48.37s)

                                                
                                    
TestErrorSpam/setup (38.17s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-459508 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-459508 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-459508 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-459508 --driver=docker  --container-runtime=docker: (38.171064978s)
--- PASS: TestErrorSpam/setup (38.17s)

                                                
                                    
TestErrorSpam/start (0.83s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

                                                
                                    
TestErrorSpam/status (1.08s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 status
--- PASS: TestErrorSpam/status (1.08s)

                                                
                                    
TestErrorSpam/pause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
TestErrorSpam/unpause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

                                                
                                    
TestErrorSpam/stop (11.16s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 stop: (10.949192224s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-459508 --log_dir /tmp/nospam-459508 stop
--- PASS: TestErrorSpam/stop (11.16s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21683-881023/.minikube/files/etc/test/nested/copy/882884/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.86s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-460513 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E1002 19:55:32.278270  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:55:32.284594  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:55:32.296005  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:55:32.317358  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:55:32.358809  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:55:32.440359  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:55:32.601970  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:55:32.923666  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:55:33.565725  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:55:34.847367  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:55:37.410124  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:55:42.532203  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-460513 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m19.858594279s)
--- PASS: TestFunctional/serial/StartWithProxy (79.86s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (49.24s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1002 19:55:48.956583  882884 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-460513 --alsologtostderr -v=8
E1002 19:55:52.774115  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:56:13.255994  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-460513 --alsologtostderr -v=8: (49.234797319s)
functional_test.go:678: soft start took 49.23660802s for "functional-460513" cluster.
I1002 19:56:38.191831  882884 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (49.24s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-460513 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-460513 cache add registry.k8s.io/pause:3.1: (1.029682255s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-460513 cache add registry.k8s.io/pause:3.3: (1.046836571s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.95s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-460513 /tmp/TestFunctionalserialCacheCmdcacheadd_local3271115309/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 cache add minikube-local-cache-test:functional-460513
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 cache delete minikube-local-cache-test:functional-460513
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-460513
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-460513 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.514728ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 kubectl -- --context functional-460513 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-460513 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (59.15s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-460513 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1002 19:56:54.218435  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-460513 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (59.154330124s)
functional_test.go:776: restart took 59.154424795s for "functional-460513" cluster.
I1002 19:57:43.981252  882884 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (59.15s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-460513 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-460513 logs: (1.45193438s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 logs --file /tmp/TestFunctionalserialLogsFileCmd3183548726/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-460513 logs --file /tmp/TestFunctionalserialLogsFileCmd3183548726/001/logs.txt: (1.293296683s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

                                                
                                    
TestFunctional/serial/InvalidService (4.86s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-460513 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-460513
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-460513: exit status 115 (401.061604ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30515 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-460513 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-460513 delete -f testdata/invalidsvc.yaml: (1.206543942s)
--- PASS: TestFunctional/serial/InvalidService (4.86s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-460513 config get cpus: exit status 14 (87.581377ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-460513 config get cpus: exit status 14 (98.738236ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                    
TestFunctional/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-460513 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-460513 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (182.764002ms)

                                                
                                                
-- stdout --
	* [functional-460513] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:08:20.175719  926432 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:08:20.175860  926432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:08:20.175871  926432 out.go:374] Setting ErrFile to fd 2...
	I1002 20:08:20.175876  926432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:08:20.176140  926432 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
	I1002 20:08:20.176521  926432 out.go:368] Setting JSON to false
	I1002 20:08:20.177696  926432 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":17439,"bootTime":1759418262,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 20:08:20.177775  926432 start.go:140] virtualization:  
	I1002 20:08:20.181142  926432 out.go:179] * [functional-460513] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:08:20.184287  926432 notify.go:221] Checking for updates...
	I1002 20:08:20.187646  926432 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:08:20.191137  926432 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:08:20.193999  926432 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig
	I1002 20:08:20.197821  926432 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube
	I1002 20:08:20.200756  926432 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:08:20.203802  926432 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:08:20.207052  926432 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:08:20.207700  926432 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:08:20.231054  926432 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:08:20.231187  926432 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:08:20.290925  926432 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:08:20.281501685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:08:20.291117  926432 docker.go:319] overlay module found
	I1002 20:08:20.294187  926432 out.go:179] * Using the docker driver based on existing profile
	I1002 20:08:20.297274  926432 start.go:306] selected driver: docker
	I1002 20:08:20.297306  926432 start.go:936] validating driver "docker" against &{Name:functional-460513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:08:20.297414  926432 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:08:20.301075  926432 out.go:203] 
	W1002 20:08:20.304022  926432 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 20:08:20.306966  926432 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-460513 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.43s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-460513 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-460513 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (242.509308ms)

                                                
                                                
-- stdout --
	* [functional-460513] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:08:19.949631  926388 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:08:19.949817  926388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:08:19.949829  926388 out.go:374] Setting ErrFile to fd 2...
	I1002 20:08:19.949835  926388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:08:19.951622  926388 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
	I1002 20:08:19.952064  926388 out.go:368] Setting JSON to false
	I1002 20:08:19.952986  926388 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":17438,"bootTime":1759418262,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 20:08:19.953065  926388 start.go:140] virtualization:  
	I1002 20:08:19.958799  926388 out.go:179] * [functional-460513] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1002 20:08:19.962031  926388 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:08:19.962051  926388 notify.go:221] Checking for updates...
	I1002 20:08:19.968175  926388 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:08:19.971073  926388 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig
	I1002 20:08:19.973960  926388 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube
	I1002 20:08:19.977024  926388 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:08:19.979975  926388 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:08:19.983549  926388 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:08:19.984130  926388 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:08:20.011079  926388 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:08:20.011347  926388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:08:20.107362  926388 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:08:20.093002976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:08:20.107556  926388 docker.go:319] overlay module found
	I1002 20:08:20.111115  926388 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1002 20:08:20.114052  926388 start.go:306] selected driver: docker
	I1002 20:08:20.114079  926388 start.go:936] validating driver "docker" against &{Name:functional-460513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-460513 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:08:20.114173  926388 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:08:20.117787  926388 out.go:203] 
	W1002 20:08:20.120939  926388 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 20:08:20.123856  926388 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh -n functional-460513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 cp functional-460513:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4024348198/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh -n functional-460513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh -n functional-460513 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.51s)

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/882884/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "sudo cat /etc/test/nested/copy/882884/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/882884.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "sudo cat /etc/ssl/certs/882884.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/882884.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "sudo cat /usr/share/ca-certificates/882884.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8828842.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "sudo cat /etc/ssl/certs/8828842.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8828842.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "sudo cat /usr/share/ca-certificates/8828842.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.73s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-460513 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-460513 ssh "sudo systemctl is-active crio": exit status 1 (288.269381ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)
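
Note: the non-zero exit above is expected. "systemctl is-active" exits 0 only when the unit is active and exits non-zero (3 here) with output "inactive" otherwise, which is what the crio unit reports because this cluster uses the docker runtime. A quick manual check against the same profile (sketch):

  minikube -p functional-460513 ssh "sudo systemctl is-active crio"    # expected: "inactive", exit status 3
  minikube -p functional-460513 ssh "sudo systemctl is-active docker"  # expected: "active", exit status 0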

                                                
                                    
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-460513 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-460513 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-460513 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 920444: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-460513 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-460513 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-460513 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [e860f043-fa1a-4cc2-8db9-9e344d0bfe60] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [e860f043-fa1a-4cc2-8db9-9e344d0bfe60] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003978811s
I1002 19:58:02.129602  882884 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.50s)
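
Note: the wait loop above can be approximated directly with kubectl. A sketch, assuming the same testdata/testsvc.yaml manifest and the functional-460513 context:

  kubectl --context functional-460513 apply -f testdata/testsvc.yaml
  # block until the nginx-svc pod is Ready, with the same 4m budget the test uses
  kubectl --context functional-460513 wait --for=condition=Ready pod -l run=nginx-svc --timeout=4m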

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-460513 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)
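
Note: the ingress IP is populated by the tunnel controller, so it only shows up while a "minikube tunnel" process is running for the profile; the lookup itself is a plain jsonpath query (sketch):

  minikube -p functional-460513 tunnel &   # keep a tunnel running in the background
  kubectl --context functional-460513 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'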

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.89.238 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-460513 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "366.946544ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "51.860273ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "380.0448ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "59.785568ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
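
Note: the timing gap above (roughly 380ms vs 60ms) comes from the light variant, which lists profiles without validating each cluster's status. A sketch of the same commands:

  minikube profile list                  # full listing, probes each cluster's status
  minikube profile list -o json --light  # faster: skips the per-cluster status check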

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-460513 /tmp/TestFunctionalparallelMountCmdany-port2180119949/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759435687082781856" to /tmp/TestFunctionalparallelMountCmdany-port2180119949/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759435687082781856" to /tmp/TestFunctionalparallelMountCmdany-port2180119949/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759435687082781856" to /tmp/TestFunctionalparallelMountCmdany-port2180119949/001/test-1759435687082781856
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-460513 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (344.503665ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1002 20:08:07.428217  882884 retry.go:31] will retry after 552.522902ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 20:08 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 20:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 20:08 test-1759435687082781856
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh cat /mount-9p/test-1759435687082781856
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-460513 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [506b9e6a-ee66-4311-8a18-320720d85d0d] Pending
helpers_test.go:352: "busybox-mount" [506b9e6a-ee66-4311-8a18-320720d85d0d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [506b9e6a-ee66-4311-8a18-320720d85d0d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [506b9e6a-ee66-4311-8a18-320720d85d0d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003881656s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-460513 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-460513 /tmp/TestFunctionalparallelMountCmdany-port2180119949/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.99s)
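
Note: the 9p mount flow above can be exercised by hand. A minimal sketch, assuming a scratch host directory (/tmp/mount-src is just a placeholder) and that the blocking mount command is backgrounded:

  mkdir -p /tmp/mount-src && echo hello > /tmp/mount-src/created-by-hand
  minikube mount -p functional-460513 /tmp/mount-src:/mount-9p &
  # the first findmnt may race the mount coming up, hence the retry in the test
  minikube -p functional-460513 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-460513 ssh "ls -la /mount-9p"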

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-460513 /tmp/TestFunctionalparallelMountCmdspecific-port4193474993/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-460513 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (337.380416ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1002 20:08:15.409866  882884 retry.go:31] will retry after 420.040838ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-460513 /tmp/TestFunctionalparallelMountCmdspecific-port4193474993/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-460513 ssh "sudo umount -f /mount-9p": exit status 1 (284.970253ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-460513 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-460513 /tmp/TestFunctionalparallelMountCmdspecific-port4193474993/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-460513 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4183746141/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-460513 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4183746141/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-460513 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4183746141/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-460513 ssh "findmnt -T" /mount1: exit status 1 (545.853089ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1002 20:08:17.433508  882884 retry.go:31] will retry after 441.996231ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-460513 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-460513 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4183746141/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-460513 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4183746141/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-460513 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4183746141/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.91s)
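
Note: the cleanup path relies on the mount command's --kill flag, which the log above shows tearing down all three background mount helpers for the profile at once (sketch):

  minikube mount -p functional-460513 --kill=true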

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-460513 service list: (1.321062901s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-460513 service list -o json: (1.321739275s)
functional_test.go:1504: Took "1.321833717s" to run "out/minikube-linux-arm64 -p functional-460513 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.32s)
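
Note: both listings come from the same subcommand; a sketch of the two forms used above:

  minikube -p functional-460513 service list          # human-readable table
  minikube -p functional-460513 service list -o json  # machine-readable variant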

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-460513 version -o=json --components: (1.040296789s)
--- PASS: TestFunctional/parallel/Version/components (1.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-460513 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-460513
docker.io/kicbase/echo-server:functional-460513
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-460513 image ls --format short --alsologtostderr:
I1002 20:12:20.255166  929912 out.go:360] Setting OutFile to fd 1 ...
I1002 20:12:20.255456  929912 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:12:20.255473  929912 out.go:374] Setting ErrFile to fd 2...
I1002 20:12:20.255502  929912 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:12:20.255951  929912 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
I1002 20:12:20.257134  929912 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:12:20.257432  929912 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:12:20.258483  929912 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
I1002 20:12:20.280684  929912 ssh_runner.go:195] Run: systemctl --version
I1002 20:12:20.280740  929912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
I1002 20:12:20.298009  929912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
I1002 20:12:20.394415  929912 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-460513 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ latest            │ 8cb2091f603e7 │ 240kB  │
│ localhost/my-image                          │ functional-460513 │ 4a9511bcdd41d │ 1.41MB │
│ docker.io/library/minikube-local-cache-test │ functional-460513 │ 8cf849edbaac5 │ 30B    │
│ registry.k8s.io/kube-apiserver              │ v1.34.1           │ 43911e833d64d │ 83.7MB │
│ docker.io/kicbase/echo-server               │ functional-460513 │ ce2d2cda2d858 │ 4.78MB │
│ registry.k8s.io/pause                       │ 3.3               │ 3d18732f8686c │ 484kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1           │ 7eb2c6ff0c5a7 │ 71.5MB │
│ docker.io/library/nginx                     │ alpine            │ 35f3cbee4fb77 │ 52.9MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ d7b100cd9a77b │ 514kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 138784d87c9c5 │ 72.1MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ a1894772a478e │ 205MB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-scheduler              │ v1.34.1           │ b5f57ec6b9867 │ 50.5MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1           │ 05baa95f5142d │ 74.7MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 1611cd07b61d5 │ 3.55MB │
│ registry.k8s.io/pause                       │ 3.1               │ 8057e0500773a │ 525kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-460513 image ls --format table --alsologtostderr:
I1002 20:12:24.526805  930289 out.go:360] Setting OutFile to fd 1 ...
I1002 20:12:24.526971  930289 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:12:24.527001  930289 out.go:374] Setting ErrFile to fd 2...
I1002 20:12:24.527023  930289 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:12:24.527386  930289 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
I1002 20:12:24.528434  930289 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:12:24.528605  930289 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:12:24.529091  930289 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
I1002 20:12:24.546874  930289 ssh_runner.go:195] Run: systemctl --version
I1002 20:12:24.546934  930289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
I1002 20:12:24.563988  930289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
I1002 20:12:24.660380  930289 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-460513 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"4a9511bcdd41dfc632edc37b3a56aaa1b7b72dc68f3fba88683570657596ef73","repoDigests":[],"repoTags":["localhost/my-image:functional-460513"],"size":"1410000"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"71500000"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"74700000"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79
d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-460513"],"size":"4780000"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"83700000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"50500000"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52900000"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":[],"repoTags":["registry.k8s
.io/etcd:3.6.4-0"],"size":"205000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cf849edbaac53576e4f1648a90d769d4cfc829c5870cd792c17fea73bcc67f5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-460513"],"size":"30"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"72100000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-460513 image ls --format json --alsologtostderr:
I1002 20:12:24.302369  930254 out.go:360] Setting OutFile to fd 1 ...
I1002 20:12:24.302568  930254 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:12:24.302596  930254 out.go:374] Setting ErrFile to fd 2...
I1002 20:12:24.302617  930254 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:12:24.303493  930254 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
I1002 20:12:24.304185  930254 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:12:24.304308  930254 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:12:24.304762  930254 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
I1002 20:12:24.324256  930254 ssh_runner.go:195] Run: systemctl --version
I1002 20:12:24.324318  930254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
I1002 20:12:24.343865  930254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
I1002 20:12:24.448098  930254 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-460513 image ls --format yaml --alsologtostderr:
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "72100000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "50500000"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205000000"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "514000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cf849edbaac53576e4f1648a90d769d4cfc829c5870cd792c17fea73bcc67f5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-460513
size: "30"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "71500000"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "74700000"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52900000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "83700000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-460513
size: "4780000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-460513 image ls --format yaml --alsologtostderr:
I1002 20:12:20.473305  929949 out.go:360] Setting OutFile to fd 1 ...
I1002 20:12:20.473435  929949 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:12:20.473459  929949 out.go:374] Setting ErrFile to fd 2...
I1002 20:12:20.473471  929949 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:12:20.473812  929949 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
I1002 20:12:20.474487  929949 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:12:20.474651  929949 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:12:20.475147  929949 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
I1002 20:12:20.492930  929949 ssh_runner.go:195] Run: systemctl --version
I1002 20:12:20.492990  929949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
I1002 20:12:20.513263  929949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
I1002 20:12:20.608450  929949 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
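
Note: the four ImageList variants above are one query rendered differently; each run shells into the node and executes docker images --no-trunc --format "{{json .}}", as the stderr traces show. Sketch:

  minikube -p functional-460513 image ls --format short
  minikube -p functional-460513 image ls --format table
  minikube -p functional-460513 image ls --format json
  minikube -p functional-460513 image ls --format yaml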

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-460513 ssh pgrep buildkitd: exit status 1 (273.904748ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image build -t localhost/my-image:functional-460513 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-460513 image build -t localhost/my-image:functional-460513 testdata/build --alsologtostderr: (3.101141433s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-460513 image build -t localhost/my-image:functional-460513 testdata/build --alsologtostderr:
I1002 20:12:20.964144  930055 out.go:360] Setting OutFile to fd 1 ...
I1002 20:12:20.964925  930055 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:12:20.964939  930055 out.go:374] Setting ErrFile to fd 2...
I1002 20:12:20.964944  930055 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:12:20.965261  930055 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
I1002 20:12:20.965903  930055 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:12:20.968034  930055 config.go:182] Loaded profile config "functional-460513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1002 20:12:20.968528  930055 cli_runner.go:164] Run: docker container inspect functional-460513 --format={{.State.Status}}
I1002 20:12:20.986962  930055 ssh_runner.go:195] Run: systemctl --version
I1002 20:12:20.987029  930055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-460513
I1002 20:12:21.007129  930055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/functional-460513/id_rsa Username:docker}
I1002 20:12:21.108201  930055 build_images.go:161] Building image from path: /tmp/build.2240571193.tar
I1002 20:12:21.108270  930055 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 20:12:21.116472  930055 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2240571193.tar
I1002 20:12:21.120558  930055 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2240571193.tar: stat -c "%s %y" /var/lib/minikube/build/build.2240571193.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2240571193.tar': No such file or directory
I1002 20:12:21.120590  930055 ssh_runner.go:362] scp /tmp/build.2240571193.tar --> /var/lib/minikube/build/build.2240571193.tar (3072 bytes)
I1002 20:12:21.139073  930055 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2240571193
I1002 20:12:21.146856  930055 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2240571193 -xf /var/lib/minikube/build/build.2240571193.tar
I1002 20:12:21.154984  930055 docker.go:361] Building image: /var/lib/minikube/build/build.2240571193
I1002 20:12:21.155104  930055 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-460513 /var/lib/minikube/build/build.2240571193
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:4a9511bcdd41dfc632edc37b3a56aaa1b7b72dc68f3fba88683570657596ef73 done
#8 naming to localhost/my-image:functional-460513 done
#8 DONE 0.1s
I1002 20:12:23.979117  930055 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-460513 /var/lib/minikube/build/build.2240571193: (2.82398559s)
I1002 20:12:23.979216  930055 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2240571193
I1002 20:12:23.987429  930055 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2240571193.tar
I1002 20:12:23.995474  930055 build_images.go:217] Built localhost/my-image:functional-460513 from /tmp/build.2240571193.tar
I1002 20:12:23.995511  930055 build_images.go:133] succeeded building to: functional-460513
I1002 20:12:23.995517  930055 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.62s)
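
Note: the build test packs testdata/build into a tarball, copies it to /var/lib/minikube/build inside the node, and runs docker build there (BuildKit steps #1-#8 above). The same result by hand (sketch; any local build context works):

  minikube -p functional-460513 image build -t localhost/my-image:functional-460513 testdata/build
  minikube -p functional-460513 image ls --format table | grep my-image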

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-460513
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image load --daemon kicbase/echo-server:functional-460513 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image load --daemon kicbase/echo-server:functional-460513 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-460513
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image load --daemon kicbase/echo-server:functional-460513 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image save kicbase/echo-server:functional-460513 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image rm kicbase/echo-server:functional-460513 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-460513
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 image save --daemon kicbase/echo-server:functional-460513 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-460513
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)
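
Note: taken together, the ImageCommands tests above round-trip one image between the host docker daemon and the cluster. A condensed sketch of the same sequence (the /tmp tarball path is a placeholder; the CI run used a workspace path):

  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-460513
  minikube -p functional-460513 image load --daemon kicbase/echo-server:functional-460513  # host daemon -> cluster
  minikube -p functional-460513 image save kicbase/echo-server:functional-460513 /tmp/echo-server-save.tar
  minikube -p functional-460513 image rm kicbase/echo-server:functional-460513
  minikube -p functional-460513 image load /tmp/echo-server-save.tar                       # tarball -> cluster
  minikube -p functional-460513 image save --daemon kicbase/echo-server:functional-460513  # cluster -> host daemon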

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-460513 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-460513 docker-env) && out/minikube-linux-arm64 status -p functional-460513"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-460513 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.07s)
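
Note: docker-env points the local docker client at the daemon inside the node, which is why the second command above lists the cluster's images. A sketch, assuming a bash shell:

  eval $(minikube -p functional-460513 docker-env)
  docker images                                             # now served by the daemon inside the node
  eval $(minikube -p functional-460513 docker-env --unset)  # undo, back to the host daemon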

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-460513
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-460513
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-460513
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (171.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1002 20:15:32.271527  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m50.163170997s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (171.07s)
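
Note: the HA start above brings up a cluster with multiple control-plane nodes; the flags are taken directly from the run (sketch):

  minikube start -p ha-913390 --ha --memory 3072 --wait true --driver=docker --container-runtime=docker
  minikube -p ha-913390 status   # should report the control-plane nodes and apiservers as Running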

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 kubectl -- rollout status deployment/busybox: (4.316037714s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-kptj7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-mtcj6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-t6mxh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-kptj7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-mtcj6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-t6mxh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-kptj7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-mtcj6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-t6mxh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.48s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.78s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-kptj7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-kptj7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-mtcj6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-mtcj6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-t6mxh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 kubectl -- exec busybox-7b57f96db7-t6mxh -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.78s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (36.54s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 node add --alsologtostderr -v 5: (35.316151012s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 status --alsologtostderr -v 5: (1.228346183s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.54s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-913390 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.22s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.21632563s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.22s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (21.02s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 status --output json --alsologtostderr -v 5: (1.162934931s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp testdata/cp-test.txt ha-913390:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2421500043/001/cp-test_ha-913390.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390:/home/docker/cp-test.txt ha-913390-m02:/home/docker/cp-test_ha-913390_ha-913390-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m02 "sudo cat /home/docker/cp-test_ha-913390_ha-913390-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390:/home/docker/cp-test.txt ha-913390-m03:/home/docker/cp-test_ha-913390_ha-913390-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m03 "sudo cat /home/docker/cp-test_ha-913390_ha-913390-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390:/home/docker/cp-test.txt ha-913390-m04:/home/docker/cp-test_ha-913390_ha-913390-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m04 "sudo cat /home/docker/cp-test_ha-913390_ha-913390-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp testdata/cp-test.txt ha-913390-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2421500043/001/cp-test_ha-913390-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390-m02:/home/docker/cp-test.txt ha-913390:/home/docker/cp-test_ha-913390-m02_ha-913390.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390 "sudo cat /home/docker/cp-test_ha-913390-m02_ha-913390.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390-m02:/home/docker/cp-test.txt ha-913390-m03:/home/docker/cp-test_ha-913390-m02_ha-913390-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m03 "sudo cat /home/docker/cp-test_ha-913390-m02_ha-913390-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390-m02:/home/docker/cp-test.txt ha-913390-m04:/home/docker/cp-test_ha-913390-m02_ha-913390-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m04 "sudo cat /home/docker/cp-test_ha-913390-m02_ha-913390-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp testdata/cp-test.txt ha-913390-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2421500043/001/cp-test_ha-913390-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390-m03:/home/docker/cp-test.txt ha-913390:/home/docker/cp-test_ha-913390-m03_ha-913390.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390 "sudo cat /home/docker/cp-test_ha-913390-m03_ha-913390.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390-m03:/home/docker/cp-test.txt ha-913390-m02:/home/docker/cp-test_ha-913390-m03_ha-913390-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m02 "sudo cat /home/docker/cp-test_ha-913390-m03_ha-913390-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390-m03:/home/docker/cp-test.txt ha-913390-m04:/home/docker/cp-test_ha-913390-m03_ha-913390-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m04 "sudo cat /home/docker/cp-test_ha-913390-m03_ha-913390-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp testdata/cp-test.txt ha-913390-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2421500043/001/cp-test_ha-913390-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390-m04:/home/docker/cp-test.txt ha-913390:/home/docker/cp-test_ha-913390-m04_ha-913390.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390 "sudo cat /home/docker/cp-test_ha-913390-m04_ha-913390.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390-m04:/home/docker/cp-test.txt ha-913390-m02:/home/docker/cp-test_ha-913390-m04_ha-913390-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m02 "sudo cat /home/docker/cp-test_ha-913390-m04_ha-913390-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 cp ha-913390-m04:/home/docker/cp-test.txt ha-913390-m03:/home/docker/cp-test_ha-913390-m04_ha-913390-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 ssh -n ha-913390-m03 "sudo cat /home/docker/cp-test_ha-913390-m04_ha-913390-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (21.02s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.92s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 node stop m02 --alsologtostderr -v 5: (11.096344616s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-913390 status --alsologtostderr -v 5: exit status 7 (821.295612ms)

-- stdout --
	ha-913390
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-913390-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-913390-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-913390-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1002 20:17:35.643628  952685 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:17:35.643846  952685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:17:35.643860  952685 out.go:374] Setting ErrFile to fd 2...
	I1002 20:17:35.643865  952685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:17:35.644127  952685 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
	I1002 20:17:35.644322  952685 out.go:368] Setting JSON to false
	I1002 20:17:35.644377  952685 mustload.go:65] Loading cluster: ha-913390
	I1002 20:17:35.644445  952685 notify.go:221] Checking for updates...
	I1002 20:17:35.644778  952685 config.go:182] Loaded profile config "ha-913390": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:17:35.644798  952685 status.go:174] checking status of ha-913390 ...
	I1002 20:17:35.645830  952685 cli_runner.go:164] Run: docker container inspect ha-913390 --format={{.State.Status}}
	I1002 20:17:35.668321  952685 status.go:371] ha-913390 host status = "Running" (err=<nil>)
	I1002 20:17:35.668368  952685 host.go:66] Checking if "ha-913390" exists ...
	I1002 20:17:35.668725  952685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-913390
	I1002 20:17:35.701645  952685 host.go:66] Checking if "ha-913390" exists ...
	I1002 20:17:35.701955  952685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:17:35.701995  952685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-913390
	I1002 20:17:35.728545  952685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33901 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/ha-913390/id_rsa Username:docker}
	I1002 20:17:35.832301  952685 ssh_runner.go:195] Run: systemctl --version
	I1002 20:17:35.839329  952685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:17:35.856846  952685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:17:35.940170  952685 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-02 20:17:35.929513876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:17:35.940769  952685 kubeconfig.go:125] found "ha-913390" server: "https://192.168.49.254:8443"
	I1002 20:17:35.940828  952685 api_server.go:166] Checking apiserver status ...
	I1002 20:17:35.940895  952685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:17:35.956918  952685 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2157/cgroup
	I1002 20:17:35.965851  952685 api_server.go:182] apiserver freezer: "7:freezer:/docker/908081f757d207d0ba2fabab34d8da8297ec813ae77c3f0892c6632cd32a24f8/kubepods/burstable/pod67caa945a1341a7dcb61b139528d17b7/7ade248e3d764eddf13b5c22bd134ddbf43d1c21cd9cafba7f87971b90dcfd86"
	I1002 20:17:35.965944  952685 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/908081f757d207d0ba2fabab34d8da8297ec813ae77c3f0892c6632cd32a24f8/kubepods/burstable/pod67caa945a1341a7dcb61b139528d17b7/7ade248e3d764eddf13b5c22bd134ddbf43d1c21cd9cafba7f87971b90dcfd86/freezer.state
	I1002 20:17:35.981564  952685 api_server.go:204] freezer state: "THAWED"
	I1002 20:17:35.981593  952685 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 20:17:35.990287  952685 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 20:17:35.990318  952685 status.go:463] ha-913390 apiserver status = Running (err=<nil>)
	I1002 20:17:35.990330  952685 status.go:176] ha-913390 status: &{Name:ha-913390 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:17:35.990347  952685 status.go:174] checking status of ha-913390-m02 ...
	I1002 20:17:35.990652  952685 cli_runner.go:164] Run: docker container inspect ha-913390-m02 --format={{.State.Status}}
	I1002 20:17:36.016697  952685 status.go:371] ha-913390-m02 host status = "Stopped" (err=<nil>)
	I1002 20:17:36.016723  952685 status.go:384] host is not running, skipping remaining checks
	I1002 20:17:36.016731  952685 status.go:176] ha-913390-m02 status: &{Name:ha-913390-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:17:36.016756  952685 status.go:174] checking status of ha-913390-m03 ...
	I1002 20:17:36.017113  952685 cli_runner.go:164] Run: docker container inspect ha-913390-m03 --format={{.State.Status}}
	I1002 20:17:36.036596  952685 status.go:371] ha-913390-m03 host status = "Running" (err=<nil>)
	I1002 20:17:36.036624  952685 host.go:66] Checking if "ha-913390-m03" exists ...
	I1002 20:17:36.037046  952685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-913390-m03
	I1002 20:17:36.055824  952685 host.go:66] Checking if "ha-913390-m03" exists ...
	I1002 20:17:36.056144  952685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:17:36.056188  952685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-913390-m03
	I1002 20:17:36.075643  952685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33911 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/ha-913390-m03/id_rsa Username:docker}
	I1002 20:17:36.171573  952685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:17:36.186588  952685 kubeconfig.go:125] found "ha-913390" server: "https://192.168.49.254:8443"
	I1002 20:17:36.186617  952685 api_server.go:166] Checking apiserver status ...
	I1002 20:17:36.186661  952685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:17:36.201015  952685 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2057/cgroup
	I1002 20:17:36.210038  952685 api_server.go:182] apiserver freezer: "7:freezer:/docker/576145cbda8e0f21f8f5671c7b554064a044525bae54f8be865e2d86ba3754ae/kubepods/burstable/pod1ebff19b119de4596e1f39f1e0e5486d/c74c132c2c3908983d87d25de48a51173574aee48091a974c837eb044411bbe5"
	I1002 20:17:36.210161  952685 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/576145cbda8e0f21f8f5671c7b554064a044525bae54f8be865e2d86ba3754ae/kubepods/burstable/pod1ebff19b119de4596e1f39f1e0e5486d/c74c132c2c3908983d87d25de48a51173574aee48091a974c837eb044411bbe5/freezer.state
	I1002 20:17:36.218930  952685 api_server.go:204] freezer state: "THAWED"
	I1002 20:17:36.218958  952685 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 20:17:36.227114  952685 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 20:17:36.227142  952685 status.go:463] ha-913390-m03 apiserver status = Running (err=<nil>)
	I1002 20:17:36.227152  952685 status.go:176] ha-913390-m03 status: &{Name:ha-913390-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:17:36.227170  952685 status.go:174] checking status of ha-913390-m04 ...
	I1002 20:17:36.227493  952685 cli_runner.go:164] Run: docker container inspect ha-913390-m04 --format={{.State.Status}}
	I1002 20:17:36.248050  952685 status.go:371] ha-913390-m04 host status = "Running" (err=<nil>)
	I1002 20:17:36.248072  952685 host.go:66] Checking if "ha-913390-m04" exists ...
	I1002 20:17:36.248380  952685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-913390-m04
	I1002 20:17:36.270737  952685 host.go:66] Checking if "ha-913390-m04" exists ...
	I1002 20:17:36.271039  952685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:17:36.271091  952685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-913390-m04
	I1002 20:17:36.289007  952685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33916 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/ha-913390-m04/id_rsa Username:docker}
	I1002 20:17:36.390814  952685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:17:36.409894  952685 status.go:176] ha-913390-m04 status: &{Name:ha-913390-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.92s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (65.37s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 node start m02 --alsologtostderr -v 5
E1002 20:17:53.630689  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:17:53.637089  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:17:53.648543  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:17:53.669966  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:17:53.711338  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:17:53.792873  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:17:53.954370  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:17:54.275949  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:17:54.917966  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:17:56.199435  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:17:58.761613  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:18:03.882890  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:18:14.125248  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:18:34.607435  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 node start m02 --alsologtostderr -v 5: (1m3.992401069s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 status --alsologtostderr -v 5: (1.245987298s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (65.37s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.17s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.168928427s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.17s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (196.84s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 stop --alsologtostderr -v 5
E1002 20:19:15.574958  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 stop --alsologtostderr -v 5: (34.803069113s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 start --wait true --alsologtostderr -v 5
E1002 20:20:32.272015  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:20:37.497636  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 start --wait true --alsologtostderr -v 5: (2m41.835582723s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (196.84s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.99s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 node delete m03 --alsologtostderr -v 5: (9.994760096s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.99s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (33.09s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 stop --alsologtostderr -v 5: (32.907289957s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-913390 status --alsologtostderr -v 5: exit status 7 (179.788461ms)

-- stdout --
	ha-913390
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-913390-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-913390-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 20:22:45.417634  980140 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:22:45.417922  980140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:22:45.417937  980140 out.go:374] Setting ErrFile to fd 2...
	I1002 20:22:45.417944  980140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:22:45.418257  980140 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
	I1002 20:22:45.418499  980140 out.go:368] Setting JSON to false
	I1002 20:22:45.418555  980140 mustload.go:65] Loading cluster: ha-913390
	I1002 20:22:45.418624  980140 notify.go:221] Checking for updates...
	I1002 20:22:45.419087  980140 config.go:182] Loaded profile config "ha-913390": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:22:45.419120  980140 status.go:174] checking status of ha-913390 ...
	I1002 20:22:45.420153  980140 cli_runner.go:164] Run: docker container inspect ha-913390 --format={{.State.Status}}
	I1002 20:22:45.441765  980140 status.go:371] ha-913390 host status = "Stopped" (err=<nil>)
	I1002 20:22:45.441792  980140 status.go:384] host is not running, skipping remaining checks
	I1002 20:22:45.441801  980140 status.go:176] ha-913390 status: &{Name:ha-913390 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:22:45.441837  980140 status.go:174] checking status of ha-913390-m02 ...
	I1002 20:22:45.442224  980140 cli_runner.go:164] Run: docker container inspect ha-913390-m02 --format={{.State.Status}}
	I1002 20:22:45.475062  980140 status.go:371] ha-913390-m02 host status = "Stopped" (err=<nil>)
	I1002 20:22:45.475088  980140 status.go:384] host is not running, skipping remaining checks
	I1002 20:22:45.475097  980140 status.go:176] ha-913390-m02 status: &{Name:ha-913390-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:22:45.475120  980140 status.go:174] checking status of ha-913390-m04 ...
	I1002 20:22:45.475447  980140 cli_runner.go:164] Run: docker container inspect ha-913390-m04 --format={{.State.Status}}
	I1002 20:22:45.498054  980140 status.go:371] ha-913390-m04 host status = "Stopped" (err=<nil>)
	I1002 20:22:45.498090  980140 status.go:384] host is not running, skipping remaining checks
	I1002 20:22:45.498101  980140 status.go:176] ha-913390-m04 status: &{Name:ha-913390-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (111.77s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1002 20:22:53.630626  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:23:21.345892  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m50.764348386s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (111.77s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (55.83s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 node add --control-plane --alsologtostderr -v 5
E1002 20:25:32.271204  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 node add --control-plane --alsologtostderr -v 5: (54.691518319s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-913390 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-913390 status --alsologtostderr -v 5: (1.137004967s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (55.83s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.16s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.154824624s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.16s)

                                                
                                    
TestImageBuild/serial/Setup (33.61s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-534325 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-534325 --driver=docker  --container-runtime=docker: (33.606932256s)
--- PASS: TestImageBuild/serial/Setup (33.61s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.94s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-534325
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-534325: (1.935877834s)
--- PASS: TestImageBuild/serial/NormalBuild (1.94s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.95s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-534325
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.95s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.87s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-534325
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.87s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.93s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-534325
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.93s)

                                                
                                    
TestJSONOutput/start/Command (74.37s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-442302 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-442302 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m14.368572226s)
--- PASS: TestJSONOutput/start/Command (74.37s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-442302 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-442302 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-442302 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-442302 --output=json --user=testUser: (5.851773871s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-110646 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-110646 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (92.96868ms)

-- stdout --
	{"specversion":"1.0","id":"ab429ed3-5f9f-4597-8ede-1092c5c34822","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-110646] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"173c5eb2-14d6-4834-b957-a4571e0da06e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"50958902-cfe2-4759-88f1-9022c38c07c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9552b2da-9e0e-4d43-8918-f5de13529527","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig"}}
	{"specversion":"1.0","id":"107f6a7d-6885-4ad3-ae55-704e761951a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube"}}
	{"specversion":"1.0","id":"59f484d3-bccf-4356-aa39-62c4f896c4b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b1c4cc96-b481-4823-835c-c4dcf82df463","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9bef4ff8-ea44-4953-8769-823ed24e0097","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-110646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-110646
--- PASS: TestErrorJSONOutput (0.24s)
TestKicCustomNetwork/create_custom_network (38.32s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-406905 --network=
E1002 20:27:53.629947  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-406905 --network=: (36.108853829s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-406905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-406905
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-406905: (2.191902485s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.32s)
TestKicCustomNetwork/use_default_bridge_network (37.56s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-203829 --network=bridge
E1002 20:28:35.346928  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-203829 --network=bridge: (35.465965939s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-203829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-203829
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-203829: (2.060812441s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.56s)
TestKicExistingNetwork (34.13s)
=== RUN   TestKicExistingNetwork
I1002 20:29:03.826930  882884 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 20:29:03.842764  882884 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 20:29:03.842845  882884 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1002 20:29:03.842864  882884 cli_runner.go:164] Run: docker network inspect existing-network
W1002 20:29:03.859685  882884 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1002 20:29:03.859715  882884 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1002 20:29:03.859731  882884 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1002 20:29:03.859851  882884 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 20:29:03.877036  882884 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2a9ba96fc5e4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:60:2e:cb:74:62} reservation:<nil>}
I1002 20:29:03.881824  882884 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1002 20:29:03.882206  882884 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d467b0}
I1002 20:29:03.882791  882884 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1002 20:29:03.882866  882884 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1002 20:29:03.943282  882884 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-305910 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-305910 --network=existing-network: (31.951107759s)
helpers_test.go:175: Cleaning up "existing-network-305910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-305910
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-305910: (2.026436629s)
I1002 20:29:37.940112  882884 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.13s)
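The run above pre-creates the Docker network itself (the docker network create call logged at network_create.go:124) and then points --network= at it. A rough by-hand equivalent, where the subnet and the profile name net-demo are illustrative assumptions:

    # Pre-create a bridge network, then reuse it for a minikube profile.
    docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 existing-network
    out/minikube-linux-arm64 start -p net-demo --network=existing-network
    docker network ls --format '{{.Name}}'    # existing-network should be listed
    out/minikube-linux-arm64 delete -p net-demo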
TestKicCustomSubnet (36.37s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-957061 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-957061 --subnet=192.168.60.0/24: (34.156449271s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-957061 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-957061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-957061
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-957061: (2.186595061s)
--- PASS: TestKicCustomSubnet (36.37s)
TestKicStaticIP (37.56s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-850567 --static-ip=192.168.200.200
E1002 20:30:32.273347  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-850567 --static-ip=192.168.200.200: (35.193769446s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-850567 ip
helpers_test.go:175: Cleaning up "static-ip-850567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-850567
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-850567: (2.201923946s)
--- PASS: TestKicStaticIP (37.56s)
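TestKicCustomSubnet and TestKicStaticIP exercise the --subnet and --static-ip start flags, and the verification reduces to inspecting the Docker network and asking minikube for its IP. A small sketch of the same checks; the profile names are placeholders:

    # Custom subnet: the profile's network should report the requested CIDR.
    out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24
    docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'   # expect 192.168.60.0/24
    # Static IP: the node should come up on the requested address.
    out/minikube-linux-arm64 start -p ip-demo --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p ip-demo ip                                            # expect 192.168.200.200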
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)
TestMinikubeProfile (74.68s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-550319 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-550319 --driver=docker  --container-runtime=docker: (32.790443141s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-553018 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-553018 --driver=docker  --container-runtime=docker: (35.89021542s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-550319
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-553018
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-553018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-553018
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-553018: (2.261003034s)
helpers_test.go:175: Cleaning up "first-550319" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-550319
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-550319: (2.162106428s)
--- PASS: TestMinikubeProfile (74.68s)
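The profile test simply switches the active profile back and forth and reads profile list -ojson after each switch. The same round trip, sketched with placeholder profile names (jq is only used here to pretty-print and is an added assumption):

    out/minikube-linux-arm64 start -p first-demo --driver=docker --container-runtime=docker
    out/minikube-linux-arm64 start -p second-demo --driver=docker --container-runtime=docker
    out/minikube-linux-arm64 profile first-demo            # make first-demo the active profile
    out/minikube-linux-arm64 profile list -ojson | jq .    # inspect the recorded profiles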
TestMountStart/serial/StartWithMountFirst (11.01s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-339440 --memory=3072 --mount-string /tmp/TestMountStartserial3900533170/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-339440 --memory=3072 --mount-string /tmp/TestMountStartserial3900533170/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (10.012639565s)
--- PASS: TestMountStart/serial/StartWithMountFirst (11.01s)
TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-339440 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
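The mount-start tests pass a host:guest --mount-string (plus uid/gid/msize/port options) at start time and then verify it with a plain ls over ssh. A condensed sketch that mirrors the flags used above; the host path, port, and profile name are stand-ins:

    # Start a no-Kubernetes profile with a host directory mounted into the node.
    out/minikube-linux-arm64 start -p mount-demo --memory=3072 --no-kubernetes \
      --mount-string /tmp/hostdir:/minikube-host --mount-port 46464 \
      --driver=docker --container-runtime=docker
    # Verify the mount point is visible inside the node.
    out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host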
TestMountStart/serial/StartWithMountSecond (8.96s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-341713 --memory=3072 --mount-string /tmp/TestMountStartserial3900533170/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-341713 --memory=3072 --mount-string /tmp/TestMountStartserial3900533170/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.964507958s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.96s)
TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-341713 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)
TestMountStart/serial/DeleteFirst (1.48s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-339440 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-339440 --alsologtostderr -v=5: (1.478999748s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)
TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-341713 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)
TestMountStart/serial/Stop (1.25s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-341713
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-341713: (1.253029382s)
--- PASS: TestMountStart/serial/Stop (1.25s)
TestMountStart/serial/RestartStopped (8.7s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-341713
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-341713: (7.699373622s)
--- PASS: TestMountStart/serial/RestartStopped (8.70s)
TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-341713 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)
TestMultiNode/serial/FreshStart2Nodes (90.61s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346064 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1002 20:32:53.629961  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-346064 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m30.033376801s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (90.61s)
TestMultiNode/serial/DeployApp2Nodes (5.38s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-346064 -- rollout status deployment/busybox: (3.306839272s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- exec busybox-7b57f96db7-4lslk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- exec busybox-7b57f96db7-zzblr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- exec busybox-7b57f96db7-4lslk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- exec busybox-7b57f96db7-zzblr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- exec busybox-7b57f96db7-4lslk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- exec busybox-7b57f96db7-zzblr -- nslookup kubernetes.default.svc.cluster.local
E1002 20:34:16.707361  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.38s)
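The deploy step above is an ordinary kubectl workflow run through minikube's bundled kubectl: apply the busybox manifest, wait for the rollout, then resolve cluster and external DNS names from each pod. A sketch of the same checks against an assumed multinode profile mn-demo (pod names differ per run, so the first pod is looked up dynamically):

    out/minikube-linux-arm64 kubectl -p mn-demo -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-arm64 kubectl -p mn-demo -- rollout status deployment/busybox
    POD=$(out/minikube-linux-arm64 kubectl -p mn-demo -- get pods -o jsonpath='{.items[0].metadata.name}')
    out/minikube-linux-arm64 kubectl -p mn-demo -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local
    out/minikube-linux-arm64 kubectl -p mn-demo -- exec "$POD" -- nslookup kubernetes.io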
TestMultiNode/serial/PingHostFrom2Pods (1.06s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- exec busybox-7b57f96db7-4lslk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- exec busybox-7b57f96db7-4lslk -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- exec busybox-7b57f96db7-zzblr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346064 -- exec busybox-7b57f96db7-zzblr -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.06s)
TestMultiNode/serial/AddNode (35.47s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-346064 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-346064 -v=5 --alsologtostderr: (34.744441388s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (35.47s)
TestMultiNode/serial/MultiNodeLabels (0.08s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-346064 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)
TestMultiNode/serial/ProfileList (0.7s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)
TestMultiNode/serial/CopyFile (10.89s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 cp testdata/cp-test.txt multinode-346064:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 cp multinode-346064:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1749824121/001/cp-test_multinode-346064.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 cp multinode-346064:/home/docker/cp-test.txt multinode-346064-m02:/home/docker/cp-test_multinode-346064_multinode-346064-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064-m02 "sudo cat /home/docker/cp-test_multinode-346064_multinode-346064-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 cp multinode-346064:/home/docker/cp-test.txt multinode-346064-m03:/home/docker/cp-test_multinode-346064_multinode-346064-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064-m03 "sudo cat /home/docker/cp-test_multinode-346064_multinode-346064-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 cp testdata/cp-test.txt multinode-346064-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 cp multinode-346064-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1749824121/001/cp-test_multinode-346064-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 cp multinode-346064-m02:/home/docker/cp-test.txt multinode-346064:/home/docker/cp-test_multinode-346064-m02_multinode-346064.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064 "sudo cat /home/docker/cp-test_multinode-346064-m02_multinode-346064.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 cp multinode-346064-m02:/home/docker/cp-test.txt multinode-346064-m03:/home/docker/cp-test_multinode-346064-m02_multinode-346064-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064-m03 "sudo cat /home/docker/cp-test_multinode-346064-m02_multinode-346064-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 cp testdata/cp-test.txt multinode-346064-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 cp multinode-346064-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1749824121/001/cp-test_multinode-346064-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 cp multinode-346064-m03:/home/docker/cp-test.txt multinode-346064:/home/docker/cp-test_multinode-346064-m03_multinode-346064.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064 "sudo cat /home/docker/cp-test_multinode-346064-m03_multinode-346064.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 cp multinode-346064-m03:/home/docker/cp-test.txt multinode-346064-m02:/home/docker/cp-test_multinode-346064-m03_multinode-346064-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 ssh -n multinode-346064-m02 "sudo cat /home/docker/cp-test_multinode-346064-m03_multinode-346064-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.89s)
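CopyFile is a matrix of minikube cp and ssh calls: push a local file to each node, pull it back to the host, copy it node to node, and read it back with sudo cat after every hop. One representative round trip, with mn-demo as a placeholder profile name:

    # Host -> control plane, then control plane -> second node, verifying content at each step.
    out/minikube-linux-arm64 -p mn-demo cp testdata/cp-test.txt mn-demo:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p mn-demo ssh -n mn-demo "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p mn-demo cp mn-demo:/home/docker/cp-test.txt mn-demo-m02:/home/docker/cp-test_copy.txt
    out/minikube-linux-arm64 -p mn-demo ssh -n mn-demo-m02 "sudo cat /home/docker/cp-test_copy.txt"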
TestMultiNode/serial/StopNode (2.35s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-346064 node stop m03: (1.272188753s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-346064 status: exit status 7 (516.65862ms)
-- stdout --
	multinode-346064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-346064-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-346064-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-346064 status --alsologtostderr: exit status 7 (565.04772ms)
-- stdout --
	multinode-346064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-346064-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-346064-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1002 20:35:06.769597 1053617 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:35:06.770039 1053617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:35:06.770077 1053617 out.go:374] Setting ErrFile to fd 2...
	I1002 20:35:06.770097 1053617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:35:06.770415 1053617 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
	I1002 20:35:06.770654 1053617 out.go:368] Setting JSON to false
	I1002 20:35:06.770725 1053617 mustload.go:65] Loading cluster: multinode-346064
	I1002 20:35:06.770794 1053617 notify.go:221] Checking for updates...
	I1002 20:35:06.771214 1053617 config.go:182] Loaded profile config "multinode-346064": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:35:06.771253 1053617 status.go:174] checking status of multinode-346064 ...
	I1002 20:35:06.771811 1053617 cli_runner.go:164] Run: docker container inspect multinode-346064 --format={{.State.Status}}
	I1002 20:35:06.794834 1053617 status.go:371] multinode-346064 host status = "Running" (err=<nil>)
	I1002 20:35:06.794856 1053617 host.go:66] Checking if "multinode-346064" exists ...
	I1002 20:35:06.795151 1053617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-346064
	I1002 20:35:06.819133 1053617 host.go:66] Checking if "multinode-346064" exists ...
	I1002 20:35:06.819430 1053617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:35:06.819481 1053617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-346064
	I1002 20:35:06.844976 1053617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/multinode-346064/id_rsa Username:docker}
	I1002 20:35:06.943157 1053617 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:06.949828 1053617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:35:06.970514 1053617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:35:07.040390 1053617 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 20:35:07.02996576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:35:07.040954 1053617 kubeconfig.go:125] found "multinode-346064" server: "https://192.168.58.2:8443"
	I1002 20:35:07.041001 1053617 api_server.go:166] Checking apiserver status ...
	I1002 20:35:07.041057 1053617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:35:07.056255 1053617 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2117/cgroup
	I1002 20:35:07.065215 1053617 api_server.go:182] apiserver freezer: "7:freezer:/docker/81d23805b7c1127fd706a5d471e37fa43f1bdc61abac41a93f1b216b9aea0963/kubepods/burstable/podf7a8e0bb7f54de77220a4f6252aa4041/a639522336843c06c1f17b7a5eb93d0321e04465ed125295ab9f80aebdc09413"
	I1002 20:35:07.065285 1053617 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/81d23805b7c1127fd706a5d471e37fa43f1bdc61abac41a93f1b216b9aea0963/kubepods/burstable/podf7a8e0bb7f54de77220a4f6252aa4041/a639522336843c06c1f17b7a5eb93d0321e04465ed125295ab9f80aebdc09413/freezer.state
	I1002 20:35:07.073396 1053617 api_server.go:204] freezer state: "THAWED"
	I1002 20:35:07.073467 1053617 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 20:35:07.081888 1053617 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1002 20:35:07.081977 1053617 status.go:463] multinode-346064 apiserver status = Running (err=<nil>)
	I1002 20:35:07.081995 1053617 status.go:176] multinode-346064 status: &{Name:multinode-346064 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:35:07.082014 1053617 status.go:174] checking status of multinode-346064-m02 ...
	I1002 20:35:07.082333 1053617 cli_runner.go:164] Run: docker container inspect multinode-346064-m02 --format={{.State.Status}}
	I1002 20:35:07.104124 1053617 status.go:371] multinode-346064-m02 host status = "Running" (err=<nil>)
	I1002 20:35:07.104152 1053617 host.go:66] Checking if "multinode-346064-m02" exists ...
	I1002 20:35:07.104470 1053617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-346064-m02
	I1002 20:35:07.122086 1053617 host.go:66] Checking if "multinode-346064-m02" exists ...
	I1002 20:35:07.122419 1053617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:35:07.122467 1053617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-346064-m02
	I1002 20:35:07.140900 1053617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34031 SSHKeyPath:/home/jenkins/minikube-integration/21683-881023/.minikube/machines/multinode-346064-m02/id_rsa Username:docker}
	I1002 20:35:07.238540 1053617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:35:07.251916 1053617 status.go:176] multinode-346064-m02 status: &{Name:multinode-346064-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:35:07.251950 1053617 status.go:174] checking status of multinode-346064-m03 ...
	I1002 20:35:07.252265 1053617 cli_runner.go:164] Run: docker container inspect multinode-346064-m03 --format={{.State.Status}}
	I1002 20:35:07.272238 1053617 status.go:371] multinode-346064-m03 host status = "Stopped" (err=<nil>)
	I1002 20:35:07.272265 1053617 status.go:384] host is not running, skipping remaining checks
	I1002 20:35:07.272273 1053617 status.go:176] multinode-346064-m03 status: &{Name:multinode-346064-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
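Worth noting from the output above: status exits non-zero (exit status 7 in both invocations) as soon as any host is stopped, so scripts should treat a failing status as a state report rather than a hard error. A small sketch of handling that, again with a placeholder profile:

    # `status` returns a non-zero exit code (7 in the run above) when a node is stopped.
    if out/minikube-linux-arm64 -p mn-demo status; then
      echo "all nodes running"
    else
      echo "at least one node is not running (status exited $?)"
    fi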
TestMultiNode/serial/StartAfterStop (9.7s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-346064 node start m03 -v=5 --alsologtostderr: (8.924509984s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.70s)
TestMultiNode/serial/RestartKeepsNodes (78.53s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-346064
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-346064
E1002 20:35:32.271998  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-346064: (22.867710124s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346064 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-346064 --wait=true -v=5 --alsologtostderr: (55.514332036s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-346064
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.53s)
TestMultiNode/serial/DeleteNode (5.74s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-346064 node delete m03: (5.016634978s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.74s)
TestMultiNode/serial/StopMultiNode (21.9s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-346064 stop: (21.697663173s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-346064 status: exit status 7 (108.701005ms)
-- stdout --
	multinode-346064
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-346064-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-346064 status --alsologtostderr: exit status 7 (89.931939ms)
-- stdout --
	multinode-346064
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-346064-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1002 20:37:03.098313 1067314 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:37:03.098465 1067314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:37:03.098475 1067314 out.go:374] Setting ErrFile to fd 2...
	I1002 20:37:03.098480 1067314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:37:03.098737 1067314 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-881023/.minikube/bin
	I1002 20:37:03.098929 1067314 out.go:368] Setting JSON to false
	I1002 20:37:03.098962 1067314 mustload.go:65] Loading cluster: multinode-346064
	I1002 20:37:03.098997 1067314 notify.go:221] Checking for updates...
	I1002 20:37:03.099351 1067314 config.go:182] Loaded profile config "multinode-346064": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1002 20:37:03.099369 1067314 status.go:174] checking status of multinode-346064 ...
	I1002 20:37:03.099881 1067314 cli_runner.go:164] Run: docker container inspect multinode-346064 --format={{.State.Status}}
	I1002 20:37:03.121319 1067314 status.go:371] multinode-346064 host status = "Stopped" (err=<nil>)
	I1002 20:37:03.121346 1067314 status.go:384] host is not running, skipping remaining checks
	I1002 20:37:03.121353 1067314 status.go:176] multinode-346064 status: &{Name:multinode-346064 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:37:03.121384 1067314 status.go:174] checking status of multinode-346064-m02 ...
	I1002 20:37:03.121700 1067314 cli_runner.go:164] Run: docker container inspect multinode-346064-m02 --format={{.State.Status}}
	I1002 20:37:03.140584 1067314 status.go:371] multinode-346064-m02 host status = "Stopped" (err=<nil>)
	I1002 20:37:03.140612 1067314 status.go:384] host is not running, skipping remaining checks
	I1002 20:37:03.140620 1067314 status.go:176] multinode-346064-m02 status: &{Name:multinode-346064-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.90s)
TestMultiNode/serial/RestartMultiNode (52.22s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346064 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1002 20:37:53.630986  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-346064 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (51.493426477s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346064 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.22s)
TestMultiNode/serial/ValidateNameConflict (39.55s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-346064
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346064-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-346064-m02 --driver=docker  --container-runtime=docker: exit status 14 (94.869241ms)
-- stdout --
	* [multinode-346064-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-346064-m02' is duplicated with machine name 'multinode-346064-m02' in profile 'multinode-346064'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346064-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-346064-m03 --driver=docker  --container-runtime=docker: (36.833677361s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-346064
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-346064: exit status 80 (375.333653ms)
-- stdout --
	* Adding node m03 to cluster multinode-346064 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-346064-m03 already exists in multinode-346064-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-346064-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-346064-m03: (2.178322911s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.55s)
TestPreload (181.01s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-408910 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-408910 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (1m26.663162714s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-408910 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-408910 image pull gcr.io/k8s-minikube/busybox: (2.316090955s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-408910
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-408910: (10.993511406s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-408910 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E1002 20:40:32.271807  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-408910 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (1m18.627644467s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-408910 image list
helpers_test.go:175: Cleaning up "test-preload-408910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-408910
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-408910: (2.1831868s)
--- PASS: TestPreload (181.01s)
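The preload test starts with --preload=false on an older Kubernetes version, side-loads an image, stops the cluster, restarts it with defaults, and then lists images, presumably to confirm the side-loaded busybox image survived the restart. A compressed sketch of that sequence; the profile name and the final grep check are assumptions:

    out/minikube-linux-arm64 start -p preload-demo --memory=3072 --preload=false \
      --kubernetes-version=v1.32.0 --driver=docker --container-runtime=docker
    out/minikube-linux-arm64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-arm64 stop -p preload-demo
    out/minikube-linux-arm64 start -p preload-demo --memory=3072
    out/minikube-linux-arm64 -p preload-demo image list | grep busybox   # the pulled image should still be listed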
TestScheduledStopUnix (105.18s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-341999 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-341999 --memory=3072 --driver=docker  --container-runtime=docker: (31.91538724s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-341999 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-341999 -n scheduled-stop-341999
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-341999 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1002 20:42:12.566242  882884 retry.go:31] will retry after 95.204µs: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
I1002 20:42:12.566391  882884 retry.go:31] will retry after 105.152µs: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
I1002 20:42:12.566651  882884 retry.go:31] will retry after 188.187µs: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
I1002 20:42:12.567456  882884 retry.go:31] will retry after 337.963µs: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
I1002 20:42:12.568575  882884 retry.go:31] will retry after 647.257µs: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
I1002 20:42:12.569631  882884 retry.go:31] will retry after 902.015µs: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
I1002 20:42:12.571494  882884 retry.go:31] will retry after 1.160549ms: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
I1002 20:42:12.573664  882884 retry.go:31] will retry after 2.017949ms: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
I1002 20:42:12.575841  882884 retry.go:31] will retry after 2.063894ms: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
I1002 20:42:12.578271  882884 retry.go:31] will retry after 4.905437ms: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
I1002 20:42:12.583479  882884 retry.go:31] will retry after 8.207915ms: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
I1002 20:42:12.592721  882884 retry.go:31] will retry after 9.953248ms: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
I1002 20:42:12.602934  882884 retry.go:31] will retry after 15.835502ms: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
I1002 20:42:12.619158  882884 retry.go:31] will retry after 16.239068ms: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
I1002 20:42:12.636392  882884 retry.go:31] will retry after 31.323076ms: open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/scheduled-stop-341999/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-341999 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-341999 -n scheduled-stop-341999
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-341999
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-341999 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1002 20:42:53.630444  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-341999
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-341999: exit status 7 (66.578921ms)

                                                
                                                
-- stdout --
	scheduled-stop-341999
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-341999 -n scheduled-stop-341999
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-341999 -n scheduled-stop-341999: exit status 7 (74.45392ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-341999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-341999
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-341999: (1.684805456s)
--- PASS: TestScheduledStopUnix (105.18s)
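The sequence above exercises minikube's scheduled-stop flow end to end: schedule a stop, cancel it, schedule a short one, and confirm the host ends up Stopped (status exits 7 once the machine is down). Below is a minimal sketch of the same flow driven from Go; the scheduled-stop-demo profile name and the assumption that a minikube binary is on PATH are illustrative, not taken from the test.
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// run invokes the minikube CLI and returns its combined output.
	func run(args ...string) (string, error) {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		return string(out), err
	}
	
	func main() {
		profile := "scheduled-stop-demo" // hypothetical profile name
	
		// Schedule a stop five minutes out, then cancel it again.
		run("stop", "-p", profile, "--schedule", "5m")
		run("stop", "-p", profile, "--cancel-scheduled")
	
		// Schedule a short stop and give it time to fire.
		run("stop", "-p", profile, "--schedule", "15s")
		time.Sleep(30 * time.Second)
	
		out, err := run("status", "--format", "{{.Host}}", "-p", profile)
		fmt.Printf("host=%q err=%v\n", out, err) // expect "Stopped" plus exit status 7
	}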

                                                
                                    
TestSkaffold (146.37s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2867988600 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-796275 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-796275 --memory=3072 --driver=docker  --container-runtime=docker: (35.939229794s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2867988600 run --minikube-profile skaffold-796275 --kube-context skaffold-796275 --status-check=true --port-forward=false --interactive=false
E1002 20:45:15.349576  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:45:32.273332  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2867988600 run --minikube-profile skaffold-796275 --kube-context skaffold-796275 --status-check=true --port-forward=false --interactive=false: (1m34.004635521s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-6468866578-2ctqg" [4518232a-680a-4f23-83b2-ca2be365e598] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003523622s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-754bd6755-rl7k8" [f74593d4-ec27-42af-9bed-fe8370007d6b] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003536934s
helpers_test.go:175: Cleaning up "skaffold-796275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-796275
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-796275: (3.012950501s)
--- PASS: TestSkaffold (146.37s)

                                                
                                    
TestInsufficientStorage (14.46s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-220282 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-220282 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (12.129092752s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b14d6c3b-cc9f-4135-a6b6-e23d21b2628b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-220282] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb5c143e-4671-42f9-9903-5cbb41ffc3a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"6c82d6da-ba27-40fb-aee3-e9221e714c2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5ba3a963-dd58-4687-a811-4689694ef948","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig"}}
	{"specversion":"1.0","id":"45a23e9b-2f62-4199-8c38-a87e932a6a26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube"}}
	{"specversion":"1.0","id":"3ec33614-ec24-4629-b3c3-08ff51d54557","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"69f13b4e-b4b5-48fd-9696-b22f46cff802","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"59d9ea21-7162-47d7-9cb5-4d5e1fb13226","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0c81926f-5237-47e6-80fc-fd7b11bcf09c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5350be45-c4f8-43ef-a8c6-ba625b5d96a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0daeb60f-8f64-45a7-b547-d99d6f908f3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1d1c68fb-3043-473a-8284-abc159f2a753","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-220282\" primary control-plane node in \"insufficient-storage-220282\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"15bad51b-67d1-4124-bae8-672751d2e48b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"db9c48fe-d79f-481f-83c1-abd01ad5d27b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd25fa5a-8d1f-4eb7-be2f-f8a2a346bee0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-220282 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-220282 --output=json --layout=cluster: exit status 7 (322.390938ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-220282","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-220282","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 20:46:04.109737 1101542 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-220282" does not appear in /home/jenkins/minikube-integration/21683-881023/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-220282 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-220282 --output=json --layout=cluster: exit status 7 (308.346796ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-220282","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-220282","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 20:46:04.416334 1101608 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-220282" does not appear in /home/jenkins/minikube-integration/21683-881023/kubeconfig
	E1002 20:46:04.426573 1101608 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/insufficient-storage-220282/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-220282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-220282
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-220282: (1.694777638s)
--- PASS: TestInsufficientStorage (14.46s)
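When started with --output=json, minikube emits one CloudEvents-style JSON object per line, and the out-of-disk condition above surfaces as an "io.k8s.sigs.minikube.error" event with exitcode 26. A minimal sketch of consuming that stream follows, assuming the event lines are piped in on stdin; the struct models only the fields visible in this log.
	package main
	
	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)
	
	type event struct {
		Type string `json:"type"`
		Data struct {
			Name     string `json:"name"`
			Message  string `json:"message"`
			ExitCode string `json:"exitcode"`
		} `json:"data"`
	}
	
	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("start failed: %s: %s (exit code %s)\n",
					ev.Data.Name, ev.Data.Message, ev.Data.ExitCode)
			}
		}
	}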

                                                
                                    
TestRunningBinaryUpgrade (71.27s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.4079981891 start -p running-upgrade-679634 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.4079981891 start -p running-upgrade-679634 --memory=3072 --vm-driver=docker  --container-runtime=docker: (36.662230306s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-679634 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1002 20:51:59.576230  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-679634 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.029367458s)
helpers_test.go:175: Cleaning up "running-upgrade-679634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-679634
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-679634: (2.326108049s)
--- PASS: TestRunningBinaryUpgrade (71.27s)

                                                
                                    
TestKubernetesUpgrade (390.81s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-870580 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-870580 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.704355132s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-870580
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-870580: (11.102392082s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-870580 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-870580 status --format={{.Host}}: exit status 7 (91.043821ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-870580 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-870580 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m44.763910114s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-870580 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-870580 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-870580 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (122.552383ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-870580] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-870580
	    minikube start -p kubernetes-upgrade-870580 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8705802 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-870580 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-870580 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-870580 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.782652878s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-870580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-870580
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-870580: (3.096216005s)
--- PASS: TestKubernetesUpgrade (390.81s)
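The refused downgrade above exits cleanly with status 106 (K8S_DOWNGRADE_UNSUPPORTED) rather than failing mid-flight. A minimal sketch of detecting that exit code from a caller, assuming a hypothetical kubernetes-upgrade-demo profile already running v1.34.1 and a minikube binary on PATH.
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Attempt a downgrade that minikube is expected to refuse.
		cmd := exec.Command("minikube", "start", "-p", "kubernetes-upgrade-demo",
			"--kubernetes-version=v1.28.0") // hypothetical profile name
		err := cmd.Run()
	
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
			fmt.Println("downgrade refused; delete the profile or keep v1.34.1")
		}
	}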

                                                
                                    
TestMissingContainerUpgrade (144.85s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.4138050151 start -p missing-upgrade-041190 --memory=3072 --driver=docker  --container-runtime=docker
E1002 20:47:53.629950  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.4138050151 start -p missing-upgrade-041190 --memory=3072 --driver=docker  --container-runtime=docker: (1m10.860016105s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-041190
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-041190: (10.562110416s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-041190
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-041190 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-041190 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (54.745031623s)
helpers_test.go:175: Cleaning up "missing-upgrade-041190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-041190
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-041190: (2.58588898s)
--- PASS: TestMissingContainerUpgrade (144.85s)

                                                
                                    
TestPause/serial/Start (88.44s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-447772 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-447772 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m28.435172359s)
--- PASS: TestPause/serial/Start (88.44s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (59.76s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-447772 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-447772 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (59.720994331s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (59.76s)

                                                
                                    
TestPause/serial/Pause (0.93s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-447772 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

                                                
                                    
TestPause/serial/VerifyStatus (0.45s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-447772 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-447772 --output=json --layout=cluster: exit status 2 (450.60146ms)

                                                
                                                
-- stdout --
	{"Name":"pause-447772","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-447772","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.45s)
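minikube status --output=json --layout=cluster returns the nested document shown above, with HTTP-style status codes per component (418 Paused, 405 Stopped, 200 OK), and the paused state is why the command exits 2. A minimal sketch of decoding it, assuming the JSON is piped in on stdin; only the fields visible in this log are modelled.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
	)
	
	type component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}
	
	type clusterState struct {
		Name       string               `json:"Name"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
		Nodes      []struct {
			Name       string               `json:"Name"`
			StatusName string               `json:"StatusName"`
			Components map[string]component `json:"Components"`
		} `json:"Nodes"`
	}
	
	func main() {
		var st clusterState
		if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %s\n", st.Name, st.StatusName) // e.g. "pause-447772: Paused"
		for _, n := range st.Nodes {
			for _, c := range n.Components {
				fmt.Printf("  %s/%s: %s\n", n.Name, c.Name, c.StatusName)
			}
		}
	}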

                                                
                                    
TestPause/serial/Unpause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-447772 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

                                                
                                    
TestPause/serial/PauseAgain (1.13s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-447772 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-447772 --alsologtostderr -v=5: (1.127754096s)
--- PASS: TestPause/serial/PauseAgain (1.13s)

                                                
                                    
TestPause/serial/DeletePaused (2.77s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-447772 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-447772 --alsologtostderr -v=5: (2.76703882s)
--- PASS: TestPause/serial/DeletePaused (2.77s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (5.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (5.29045449s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-447772
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-447772: exit status 1 (21.684373ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-447772: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (5.36s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (7.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (7.55s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (69.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2654195160 start -p stopped-upgrade-109204 --memory=3072 --vm-driver=docker  --container-runtime=docker
E1002 20:50:32.271360  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:37.634237  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:37.641916  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:37.654665  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:37.676029  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:37.718076  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:37.799434  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:37.960890  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:38.282218  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:38.924261  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:40.205696  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2654195160 start -p stopped-upgrade-109204 --memory=3072 --vm-driver=docker  --container-runtime=docker: (36.670906203s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2654195160 -p stopped-upgrade-109204 stop
E1002 20:50:42.767745  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:47.890272  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2654195160 -p stopped-upgrade-109204 stop: (10.962986152s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-109204 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1002 20:50:56.710071  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:50:58.132151  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-109204 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (22.295506514s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (69.93s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-109204
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-109204: (1.112211961s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-974840 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-974840 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (99.437997ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-974840] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-881023/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-881023/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-974840 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1002 20:53:21.498437  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-974840 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.924086779s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-974840 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.30s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-974840 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-974840 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (15.792742316s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-974840 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-974840 status -o json: exit status 2 (465.183832ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-974840","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-974840
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-974840: (2.156296269s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.41s)
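The plain status -o json output above (without --layout=cluster) is a single flat object; on a --no-kubernetes profile the host stays Running while kubelet and apiserver report Stopped, which is why the command exits with status 2. A minimal sketch of decoding it, assuming the JSON is piped in on stdin and modelling only the fields visible in this log.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
	)
	
	type profileStatus struct {
		Name       string `json:"Name"`
		Host       string `json:"Host"`
		Kubelet    string `json:"Kubelet"`
		APIServer  string `json:"APIServer"`
		Kubeconfig string `json:"Kubeconfig"`
		Worker     bool   `json:"Worker"`
	}
	
	func main() {
		var st profileStatus
		if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
			panic(err)
		}
		// Expect host=Running with kubelet/apiserver Stopped for --no-kubernetes.
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n",
			st.Name, st.Host, st.Kubelet, st.APIServer)
	}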

                                                
                                    
TestNoKubernetes/serial/Start (11.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-974840 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-974840 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (11.582087258s)
--- PASS: TestNoKubernetes/serial/Start (11.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-974840 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-974840 "sudo systemctl is-active --quiet service kubelet": exit status 1 (361.633576ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.50s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-974840
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-974840: (1.309715806s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-974840 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-974840 --driver=docker  --container-runtime=docker: (7.945999711s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.95s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-974840 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-974840 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.783653ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (49.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-657153 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-657153 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (49.391657293s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (49.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-657153 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6e712974-f96f-4555-9de8-cfba8861ae4e] Pending
helpers_test.go:352: "busybox" [6e712974-f96f-4555-9de8-cfba8861ae4e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6e712974-f96f-4555-9de8-cfba8861ae4e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003730917s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-657153 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-657153 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-657153 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.053114979s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-657153 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (10.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-657153 --alsologtostderr -v=3
E1002 20:57:53.631037  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-657153 --alsologtostderr -v=3: (10.977765186s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.98s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-657153 -n old-k8s-version-657153
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-657153 -n old-k8s-version-657153: exit status 7 (96.715498ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-657153 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (29.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-657153 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-657153 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (28.632117418s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-657153 -n old-k8s-version-657153
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (29.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pcwqj" [67e55b79-0958-48a2-be18-5c81c8cd369e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pcwqj" [67e55b79-0958-48a2-be18-5c81c8cd369e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.003859037s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pcwqj" [67e55b79-0958-48a2-be18-5c81c8cd369e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003558942s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-657153 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-657153 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-657153 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-657153 -n old-k8s-version-657153
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-657153 -n old-k8s-version-657153: exit status 2 (338.715106ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-657153 -n old-k8s-version-657153
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-657153 -n old-k8s-version-657153: exit status 2 (343.417657ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-657153 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-657153 -n old-k8s-version-657153
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-657153 -n old-k8s-version-657153
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (83.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-243057 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-243057 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m23.0383385s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (83.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (81.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-274078 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-274078 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m21.413902322s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.41s)

TestStartStop/group/no-preload/serial/DeployApp (10.49s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-243057 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d063d990-02f0-4584-9cee-a1699780bbd3] Pending
helpers_test.go:352: "busybox" [d063d990-02f0-4584-9cee-a1699780bbd3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d063d990-02f0-4584-9cee-a1699780bbd3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003857023s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-243057 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.49s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-243057 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-243057 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.29770714s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-243057 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.46s)

TestStartStop/group/no-preload/serial/Stop (11.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-243057 --alsologtostderr -v=3
E1002 21:00:32.272215  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:00:37.629386  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-243057 --alsologtostderr -v=3: (11.300338035s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.30s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-243057 -n no-preload-243057
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-243057 -n no-preload-243057: exit status 7 (132.811786ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-243057 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/no-preload/serial/SecondStart (53.36s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-243057 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-243057 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (52.97326302s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-243057 -n no-preload-243057
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.36s)

TestStartStop/group/embed-certs/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-274078 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a56a2a97-70cc-490c-bdd0-ac4de1e51ac8] Pending
helpers_test.go:352: "busybox" [a56a2a97-70cc-490c-bdd0-ac4de1e51ac8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a56a2a97-70cc-490c-bdd0-ac4de1e51ac8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.007356441s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-274078 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-274078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-274078 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/embed-certs/serial/Stop (10.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-274078 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-274078 --alsologtostderr -v=3: (10.959137522s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.96s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-x4j5q" [17a2999e-c9e9-46ec-a7a4-6f1c06fe798e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003811164s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-274078 -n embed-certs-274078
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-274078 -n embed-certs-274078: exit status 7 (84.697219ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-274078 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (57.98s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-274078 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-274078 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (57.275707298s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-274078 -n embed-certs-274078
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (57.98s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-x4j5q" [17a2999e-c9e9-46ec-a7a4-6f1c06fe798e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003929641s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-243057 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-243057 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-243057 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-243057 --alsologtostderr -v=1: (1.495728588s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-243057 -n no-preload-243057
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-243057 -n no-preload-243057: exit status 2 (544.841407ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-243057 -n no-preload-243057
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-243057 -n no-preload-243057: exit status 2 (658.172557ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-243057 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-243057 --alsologtostderr -v=1: (1.047269589s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-243057 -n no-preload-243057
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-243057 -n no-preload-243057
--- PASS: TestStartStop/group/no-preload/serial/Pause (5.27s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-388020 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 21:01:55.351368  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-388020 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m15.194812418s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.19s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6jlzj" [5ca8399d-2875-4396-9ffe-1b48ece4c573] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004042794s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6jlzj" [5ca8399d-2875-4396-9ffe-1b48ece4c573] Running
E1002 21:02:37.214391  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:02:37.220876  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:02:37.232340  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:02:37.253984  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:02:37.295468  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:02:37.376972  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:02:37.538678  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:02:37.860662  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:02:38.503057  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:02:39.784641  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:02:42.346904  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004348112s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-274078 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-274078 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.32s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-274078 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-274078 -n embed-certs-274078
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-274078 -n embed-certs-274078: exit status 2 (372.330576ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-274078 -n embed-certs-274078
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-274078 -n embed-certs-274078: exit status 2 (343.549792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-274078 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-274078 -n embed-certs-274078
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-274078 -n embed-certs-274078
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.32s)

TestStartStop/group/newest-cni/serial/FirstStart (40.72s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-174381 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 21:02:53.629952  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/functional-460513/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:02:57.710933  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-174381 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (40.717609256s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.72s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-388020 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c2116a25-4372-4096-bbb6-218e6e08efb3] Pending
helpers_test.go:352: "busybox" [c2116a25-4372-4096-bbb6-218e6e08efb3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c2116a25-4372-4096-bbb6-218e6e08efb3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003605294s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-388020 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.53s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-388020 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1002 21:03:18.193314  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-388020 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.340144085s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-388020 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.50s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-388020 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-388020 --alsologtostderr -v=3: (11.276225698s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.28s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.72s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-174381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-174381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.719345541s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.72s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020: exit status 7 (128.460016ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-388020 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (31.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-388020 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-388020 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (30.462008908s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (31.01s)

TestStartStop/group/newest-cni/serial/Stop (11.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-174381 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-174381 --alsologtostderr -v=3: (11.353418145s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.35s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-174381 -n newest-cni-174381
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-174381 -n newest-cni-174381: exit status 7 (162.16835ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-174381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/newest-cni/serial/SecondStart (23.62s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-174381 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1002 21:03:59.154642  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-174381 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (23.216347613s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-174381 -n newest-cni-174381
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (23.62s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tc6hk" [e5ae9662-8f03-4c3d-8e22-eef12797a6da] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tc6hk" [e5ae9662-8f03-4c3d-8e22-eef12797a6da] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.003252102s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-174381 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (3.38s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-174381 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-174381 -n newest-cni-174381
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-174381 -n newest-cni-174381: exit status 2 (354.716225ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-174381 -n newest-cni-174381
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-174381 -n newest-cni-174381: exit status 2 (354.192422ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-174381 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-174381 -n newest-cni-174381
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-174381 -n newest-cni-174381
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.38s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tc6hk" [e5ae9662-8f03-4c3d-8e22-eef12797a6da] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003217914s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-388020 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

TestNetworkPlugins/group/auto/Start (60.58s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m0.582179411s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.58s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-388020 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-388020 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-388020 --alsologtostderr -v=1: (1.034417787s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020: exit status 2 (365.361066ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020: exit status 2 (321.524034ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-388020 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020: (1.017788954s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020: (1.015820497s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.58s)
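The pause checks exercised by the Pause subtests above can be replayed by hand with the same command sequence the test drives. This is a minimal sketch, assuming a locally installed minikube binary in place of the CI-built out/minikube-linux-arm64 and an existing profile (the profile/node name below is taken from the log and is only an example); as the test's own "status error ... (may be ok)" notes indicate, the status queries are expected to exit non-zero (status 2) while the cluster is paused:

  # pause the control plane and kubelet of the given profile
  minikube pause -p default-k8s-diff-port-388020 --alsologtostderr -v=1
  # while paused, these report Paused/Stopped and exit with status 2
  minikube status --format={{.APIServer}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020
  minikube status --format={{.Kubelet}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020
  # resume and re-check; both queries should then exit 0 again
  minikube unpause -p default-k8s-diff-port-388020 --alsologtostderr -v=1
  minikube status --format={{.APIServer}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020
  minikube status --format={{.Kubelet}} -p default-k8s-diff-port-388020 -n default-k8s-diff-port-388020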
E1002 21:12:05.661372  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/calico-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:05.668842  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/calico-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:05.680796  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/calico-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:05.703124  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/calico-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:05.744493  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/calico-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:05.825883  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/calico-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:05.987473  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/calico-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:06.309366  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/calico-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:06.950942  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/calico-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:08.232258  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/calico-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:10.793598  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/calico-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:15.915373  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/calico-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:17.425335  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/custom-flannel-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:17.431802  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/custom-flannel-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:17.443402  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/custom-flannel-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:17.465068  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/custom-flannel-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:17.506788  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/custom-flannel-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:17.588259  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/custom-flannel-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:17.749913  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/custom-flannel-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:18.071788  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/custom-flannel-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:18.714055  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/custom-flannel-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:19.995414  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/custom-flannel-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:22.557224  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/custom-flannel-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:26.157711  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/calico-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:27.679345  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/custom-flannel-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:37.214795  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:37.921339  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/custom-flannel-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (65.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m5.224340361s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.22s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-409884 "pgrep -a kubelet"
I1002 21:05:13.989590  882884 config.go:182] Loaded profile config "auto-409884": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-409884 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sgt5l" [8459e397-118e-4de3-886a-6be964ad8146] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 21:05:14.788468  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:05:14.794998  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:05:14.806339  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:05:14.827656  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:05:14.869036  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:05:14.950423  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:05:15.111976  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:05:15.434880  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:05:16.076448  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:05:17.358709  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-sgt5l" [8459e397-118e-4de3-886a-6be964ad8146] Running
E1002 21:05:19.920912  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:05:21.076192  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004274435s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.32s)
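
The NetCatPod steps in this group deploy a small netcat test workload and poll until the pod labelled app=netcat reports Ready. A minimal sketch of the same sequence run by hand, assuming the auto-409884 profile from this run and the testdata/netcat-deployment.yaml manifest referenced above; the kubectl wait call stands in for the harness's own polling loop and is only an illustration:

    # Re-create the netcat test deployment used by the NetCatPod checks
    kubectl --context auto-409884 replace --force -f testdata/netcat-deployment.yaml

    # Block until the pod labelled app=netcat is Ready (the harness allows up to 15m)
    kubectl --context auto-409884 wait --for=condition=Ready pod -l app=netcat --timeout=15m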

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-409884 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1002 21:05:25.046265  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.27s)
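
DNS, Localhost and HairPin above are three connectivity probes run from inside the netcat pod: resolving a cluster service name, connecting to the pod's own port over localhost, and connecting back to the pod through the "netcat" service name (hairpin traffic). The same probes, reusing the exact commands from this run against the deployment created in the NetCatPod step:

    # Cluster DNS: resolve the kubernetes service name from inside the pod
    kubectl --context auto-409884 exec deployment/netcat -- nslookup kubernetes.default

    # Localhost: the container reaches its own port 8080 directly
    kubectl --context auto-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

    # Hairpin: the pod reaches itself via the netcat service name
    kubectl --context auto-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"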

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-r7j99" [e26cc6de-b996-441f-bc47-8f7bf8efa1a1] Running
E1002 21:05:32.273357  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005993813s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
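
ControllerPod only confirms that the CNI's node agent is Running and Ready, using the labels seen in this log (app=kindnet here, k8s-app=calico-node for calico, app=flannel in the kube-flannel namespace for flannel). A hand-run equivalent, sketched with kubectl wait as an illustration rather than the harness's own code:

    # List the kindnet agent pods and wait for them to become Ready
    kubectl --context kindnet-409884 -n kube-system get pods -l app=kindnet
    kubectl --context kindnet-409884 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m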

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-409884 "pgrep -a kubelet"
I1002 21:05:34.757790  882884 config.go:182] Loaded profile config "kindnet-409884": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-409884 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2fwq6" [96b8f809-897f-41f4-aa1c-7dcca8fddf4a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 21:05:35.288464  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:05:37.629508  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-2fwq6" [96b8f809-897f-41f4-aa1c-7dcca8fddf4a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.009912859s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-409884 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (77.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E1002 21:05:55.770269  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m17.462654225s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.46s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (65.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E1002 21:06:36.732032  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:07:00.701695  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m5.670815148s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.67s)
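
As the start commands in this group show, minikube's --cni flag takes either a built-in plugin name (kindnet, calico, flannel, bridge, false) or a path to a CNI manifest, which is how this custom-flannel variant installs testdata/kube-flannel.yaml. A condensed sketch of both forms, with hypothetical profile names:

    # Built-in CNI selected by name
    out/minikube-linux-arm64 start -p cni-by-name --driver=docker --container-runtime=docker --cni=calico

    # Custom CNI installed from a manifest on disk
    out/minikube-linux-arm64 start -p cni-from-manifest --driver=docker --container-runtime=docker --cni=testdata/kube-flannel.yaml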

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-nv4l5" [348f6142-0d8e-46a1-ab7f-8f7a2b5a776a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-nv4l5" [348f6142-0d8e-46a1-ab7f-8f7a2b5a776a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004245524s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-409884 "pgrep -a kubelet"
I1002 21:07:12.042187  882884 config.go:182] Loaded profile config "calico-409884": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-409884 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zhznx" [8d0fc408-c6ea-4f29-b501-95d4a400998a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zhznx" [8d0fc408-c6ea-4f29-b501-95d4a400998a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003523407s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-409884 "pgrep -a kubelet"
I1002 21:07:17.144090  882884 config.go:182] Loaded profile config "custom-flannel-409884": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-409884 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g2kst" [50edd6cf-0096-4c92-a967-c6cd7eaa6eeb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g2kst" [50edd6cf-0096-4c92-a967-c6cd7eaa6eeb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004167307s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-409884 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-409884 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/Start (81.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m21.314487588s)
--- PASS: TestNetworkPlugins/group/false/Start (81.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (89.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E1002 21:07:58.653908  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:04.918509  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/old-k8s-version-657153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:06.994648  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:07.001310  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:07.012761  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:07.034156  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:07.076104  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:07.158438  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:07.319799  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:07.641783  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:08.283166  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:09.564415  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:12.126273  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:17.247976  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:27.491188  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:47.972461  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m29.071965118s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.07s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-409884 "pgrep -a kubelet"
I1002 21:09:13.040430  882884 config.go:182] Loaded profile config "false-409884": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-409884 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-62vml" [822d1f0b-caf1-4b62-8c73-2e768580f204] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-62vml" [822d1f0b-caf1-4b62-8c73-2e768580f204] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.00322853s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-409884 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-409884 "pgrep -a kubelet"
I1002 21:09:27.257542  882884 config.go:182] Loaded profile config "enable-default-cni-409884": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-409884 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-chrcc" [b120f8e5-47df-4c47-9922-49624b998bc1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 21:09:28.933903  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-chrcc" [b120f8e5-47df-4c47-9922-49624b998bc1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004753254s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-409884 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (57.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (57.74198922s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.74s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (80.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E1002 21:10:14.272700  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:14.279020  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:14.290348  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:14.311674  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:14.353024  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:14.435631  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:14.597162  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:14.788159  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:14.919399  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:15.561295  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:16.842827  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:19.404303  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:24.526159  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:28.274963  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kindnet-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:28.281280  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kindnet-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:28.292677  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kindnet-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:28.314092  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kindnet-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:28.355525  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kindnet-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:28.437451  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kindnet-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:28.598826  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kindnet-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:28.921366  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kindnet-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:29.563689  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kindnet-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:30.845766  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kindnet-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:32.271571  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/addons-660088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:33.407245  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kindnet-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:34.767525  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:37.630113  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/skaffold-796275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:38.529370  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kindnet-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:10:42.496203  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/no-preload-243057/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m20.106803513s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-k5p74" [3c1004a6-b861-4680-993d-03c3c1eb397f] Running
E1002 21:10:48.771410  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kindnet-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004278798s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-409884 "pgrep -a kubelet"
I1002 21:10:49.885501  882884 config.go:182] Loaded profile config "flannel-409884": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-409884 replace --force -f testdata/netcat-deployment.yaml
I1002 21:10:50.318953  882884 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7rn6t" [a4abb3ae-8c15-4042-9ee6-96b1a403ab78] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 21:10:50.855789  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/default-k8s-diff-port-388020/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-7rn6t" [a4abb3ae-8c15-4042-9ee6-96b1a403ab78] Running
E1002 21:10:55.249576  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004464856s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.45s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-409884 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (76.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-409884 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m16.670571745s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (76.67s)
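
kubenet is the one variant in this group selected with --network-plugin=kubenet rather than --cni, and the KubeletFlags step that follows simply greps the kubelet command line over SSH to confirm how the node was configured. A trimmed-down repeat of the same two commands from this run:

    # Start a cluster with the kubenet network plugin
    out/minikube-linux-arm64 start -p kubenet-409884 --memory=3072 --network-plugin=kubenet --driver=docker --container-runtime=docker

    # Inspect the kubelet invocation on the node
    out/minikube-linux-arm64 ssh -p kubenet-409884 "pgrep -a kubelet"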

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-409884 "pgrep -a kubelet"
I1002 21:11:25.265282  882884 config.go:182] Loaded profile config "bridge-409884": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-409884 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cssth" [c01965b9-e19c-4fc1-8318-0e2ad1659ae3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cssth" [c01965b9-e19c-4fc1-8318-0e2ad1659ae3] Running
E1002 21:11:36.211182  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/auto-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004145061s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-409884 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-409884 "pgrep -a kubelet"
I1002 21:12:40.035085  882884 config.go:182] Loaded profile config "kubenet-409884": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-409884 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9hq25" [075fe72c-e376-452d-9238-0fbb8ed6c503] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9hq25" [075fe72c-e376-452d-9238-0fbb8ed6c503] Running
E1002 21:12:46.639838  882884 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/calico-409884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004277571s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-409884 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-409884 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

                                                
                                    

Test skip (26/347)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.45s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-285021 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-285021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-285021
--- SKIP: TestDownloadOnlyKic (0.45s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

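Several of the skips in this report (TestDownloadOnlyKic, TestKVMDriverInstallOrUpdate above, and the MySQL skip further down) are architecture guards tied to https://github.com/kubernetes/minikube/issues/10144. As an illustrative sketch only, not minikube's actual helper code, an arm64 guard in a Go test typically looks like this:

// arch_skip_example_test.go -- illustrative only; the names here are hypothetical.
package example

import (
	"runtime"
	"testing"
)

// skipIfArm64 marks the test as skipped on arm64 hosts, mirroring the
// "Skip if arm64" messages recorded in the report above.
func skipIfArm64(t *testing.T) {
	t.Helper()
	if runtime.GOARCH == "arm64" {
		t.Skipf("skipping on %s/%s; see kubernetes/minikube#10144", runtime.GOOS, runtime.GOARCH)
	}
}

func TestNeedsAmd64(t *testing.T) {
	skipIfArm64(t)
	// test body that only runs on amd64 hosts
}
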
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.28s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-072161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-072161
--- SKIP: TestStartStop/group/disable-driver-mounts (0.28s)

TestNetworkPlugins/group/cilium (4.22s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-409884 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-409884

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-409884

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-409884

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-409884

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-409884

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-409884

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-409884

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-409884

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-409884

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-409884

>>> host: /etc/nsswitch.conf:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: /etc/hosts:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: /etc/resolv.conf:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-409884

>>> host: crictl pods:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: crictl containers:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> k8s: describe netcat deployment:
error: context "cilium-409884" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-409884" does not exist

>>> k8s: netcat logs:
error: context "cilium-409884" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-409884" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-409884" does not exist

>>> k8s: coredns logs:
error: context "cilium-409884" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-409884" does not exist

>>> k8s: api server logs:
error: context "cilium-409884" does not exist

>>> host: /etc/cni:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: ip a s:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: ip r s:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: iptables-save:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: iptables table nat:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-409884

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-409884

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-409884" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-409884" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-409884

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-409884

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-409884" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-409884" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-409884" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-409884" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-409884" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: kubelet daemon config:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> k8s: kubelet logs:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-881023/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 20:54:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-870580
contexts:
- context:
    cluster: kubernetes-upgrade-870580
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 20:54:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-870580
  name: kubernetes-upgrade-870580
current-context: kubernetes-upgrade-870580
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-870580
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kubernetes-upgrade-870580/client.crt
    client-key: /home/jenkins/minikube-integration/21683-881023/.minikube/profiles/kubernetes-upgrade-870580/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-409884

>>> host: docker daemon status:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: docker daemon config:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: docker system info:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: cri-docker daemon status:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: cri-docker daemon config:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: cri-dockerd version:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: containerd daemon status:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: containerd daemon config:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: containerd config dump:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: crio daemon status:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: crio daemon config:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: /etc/crio:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

>>> host: crio config:
* Profile "cilium-409884" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-409884"

----------------------- debugLogs end: cilium-409884 [took: 4.050552258s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-409884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-409884
--- SKIP: TestNetworkPlugins/group/cilium (4.22s)
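The repeated "context was not found" and "does not exist" messages in the debugLogs above are expected: the cilium-409884 profile was skipped before any cluster was started, so no such context was ever written to the shared kubeconfig (only kubernetes-upgrade-870580 appears in the kubectl config dump). As a minimal sketch, assuming k8s.io/client-go is available and using a hypothetical kubeconfig path, the same lookup failure can be reproduced like this:

// kubeconfig_context_check.go -- illustrative only.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; the CI jobs point at a workspace-local kubeconfig.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	// cfg.Contexts maps context names to their cluster/user/namespace tuples.
	if _, ok := cfg.Contexts["cilium-409884"]; !ok {
		fmt.Println(`context "cilium-409884" does not exist`)
	}
	fmt.Println("current context:", cfg.CurrentContext)
}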