Test Report: Docker_Linux_containerd_arm64 21830

3aa0d58a4eff13dd9d5f058e659508fb4ffd2206:2025-11-01:42156

Failed tests (2/332)

Order   Failed test                             Duration (s)
90      TestFunctional/parallel/DashboardCmd    302.51
256     TestKubernetesUpgrade                   539.09
TestFunctional/parallel/DashboardCmd (302.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-269105 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-269105 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-269105 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-269105 --alsologtostderr -v=1] stderr:
I1101 10:53:10.406195 2885313 out.go:360] Setting OutFile to fd 1 ...
I1101 10:53:10.407643 2885313 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:10.407657 2885313 out.go:374] Setting ErrFile to fd 2...
I1101 10:53:10.407663 2885313 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:10.407987 2885313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
I1101 10:53:10.408274 2885313 mustload.go:66] Loading cluster: functional-269105
I1101 10:53:10.408697 2885313 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:10.409155 2885313 cli_runner.go:164] Run: docker container inspect functional-269105 --format={{.State.Status}}
I1101 10:53:10.427185 2885313 host.go:66] Checking if "functional-269105" exists ...
I1101 10:53:10.427488 2885313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1101 10:53:10.483420 2885313 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:53:10.474427151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1101 10:53:10.483534 2885313 api_server.go:166] Checking apiserver status ...
I1101 10:53:10.483597 2885313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 10:53:10.483638 2885313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-269105
I1101 10:53:10.501497 2885313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36806 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/functional-269105/id_rsa Username:docker}
I1101 10:53:10.618203 2885313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4797/cgroup
I1101 10:53:10.626252 2885313 api_server.go:182] apiserver freezer: "9:freezer:/docker/24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224/kubepods/burstable/podc806d046dbcb3721a03bcba9e599052c/e51a5e843eeba1270878d73f1eec896c3fe87319d27a3d712d4daa31006c64e2"
I1101 10:53:10.626361 2885313 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224/kubepods/burstable/podc806d046dbcb3721a03bcba9e599052c/e51a5e843eeba1270878d73f1eec896c3fe87319d27a3d712d4daa31006c64e2/freezer.state
I1101 10:53:10.634951 2885313 api_server.go:204] freezer state: "THAWED"
I1101 10:53:10.634992 2885313 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1101 10:53:10.643316 2885313 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1101 10:53:10.643361 2885313 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1101 10:53:10.643547 2885313 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:10.643560 2885313 addons.go:70] Setting dashboard=true in profile "functional-269105"
I1101 10:53:10.643568 2885313 addons.go:239] Setting addon dashboard=true in "functional-269105"
I1101 10:53:10.643595 2885313 host.go:66] Checking if "functional-269105" exists ...
I1101 10:53:10.644034 2885313 cli_runner.go:164] Run: docker container inspect functional-269105 --format={{.State.Status}}
I1101 10:53:10.664778 2885313 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1101 10:53:10.667809 2885313 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1101 10:53:10.670583 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1101 10:53:10.670605 2885313 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1101 10:53:10.670673 2885313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-269105
I1101 10:53:10.688512 2885313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36806 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/functional-269105/id_rsa Username:docker}
I1101 10:53:10.799605 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1101 10:53:10.799674 2885313 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1101 10:53:10.819757 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1101 10:53:10.819779 2885313 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1101 10:53:10.835269 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1101 10:53:10.835289 2885313 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1101 10:53:10.850658 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1101 10:53:10.850705 2885313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1101 10:53:10.865364 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1101 10:53:10.865405 2885313 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1101 10:53:10.879225 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1101 10:53:10.879267 2885313 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1101 10:53:10.895774 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1101 10:53:10.895817 2885313 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1101 10:53:10.909437 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1101 10:53:10.909483 2885313 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1101 10:53:10.928111 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1101 10:53:10.928170 2885313 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1101 10:53:10.948370 2885313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1101 10:53:11.758001 2885313 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-269105 addons enable metrics-server

I1101 10:53:11.760845 2885313 addons.go:202] Writing out "functional-269105" config to set dashboard=true...
W1101 10:53:11.761134 2885313 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1101 10:53:11.761776 2885313 kapi.go:59] client config for functional-269105: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.key", CAFile:"/home/jenkins/minikube-integration/21830-2847530/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1101 10:53:11.762314 2885313 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1101 10:53:11.762339 2885313 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1101 10:53:11.762347 2885313 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1101 10:53:11.762354 2885313 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1101 10:53:11.762358 2885313 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1101 10:53:11.781736 2885313 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  d6c4a0c2-3410-4e01-b475-fe1c09284579 798 0 2025-11-01 10:53:11 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-11-01 10:53:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.100.88.212,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.100.88.212],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1101 10:53:11.781899 2885313 out.go:285] * Launching proxy ...
* Launching proxy ...
I1101 10:53:11.781973 2885313 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-269105 proxy --port 36195]
I1101 10:53:11.782331 2885313 dashboard.go:159] Waiting for kubectl to output host:port ...
I1101 10:53:11.834222 2885313 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1101 10:53:11.834274 2885313 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1101 10:53:11.866227 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[72b8fc1f-3426-45bb-b5e3-900cdad05f4f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4000772e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282500 TLS:<nil>}
I1101 10:53:11.866309 2885313 retry.go:31] will retry after 71.178µs: Temporary Error: unexpected response code: 503
I1101 10:53:11.871082 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8b0f6482-e9e2-4a23-82c3-6d6eed07a0b3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4000772f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282640 TLS:<nil>}
I1101 10:53:11.871154 2885313 retry.go:31] will retry after 104.835µs: Temporary Error: unexpected response code: 503
I1101 10:53:11.875107 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ab0ea456-a8a1-449c-a25c-23e0ed5e4d1d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4000772f80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282780 TLS:<nil>}
I1101 10:53:11.875167 2885313 retry.go:31] will retry after 321.289µs: Temporary Error: unexpected response code: 503
I1101 10:53:11.878988 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[528c7557-3461-490c-86b2-7778d9baef68] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4000773040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002828c0 TLS:<nil>}
I1101 10:53:11.879043 2885313 retry.go:31] will retry after 414.711µs: Temporary Error: unexpected response code: 503
I1101 10:53:11.882805 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bc58630f-4344-4d7e-84b0-d5a214303a5e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4000773140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282a00 TLS:<nil>}
I1101 10:53:11.882859 2885313 retry.go:31] will retry after 644.89µs: Temporary Error: unexpected response code: 503
I1101 10:53:11.886668 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ea3e75be-9d1d-4acc-833f-0fdaedd09f06] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x40007731c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282b40 TLS:<nil>}
I1101 10:53:11.886741 2885313 retry.go:31] will retry after 762.562µs: Temporary Error: unexpected response code: 503
I1101 10:53:11.890615 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d46c9403-0234-4f57-9fb9-6da6493c68fc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4001668040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004aa640 TLS:<nil>}
I1101 10:53:11.890676 2885313 retry.go:31] will retry after 735.291µs: Temporary Error: unexpected response code: 503
I1101 10:53:11.895594 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2d61e390-6957-4c9b-927c-b03120fb6e70] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x40016680c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004aaa00 TLS:<nil>}
I1101 10:53:11.895655 2885313 retry.go:31] will retry after 1.877097ms: Temporary Error: unexpected response code: 503
I1101 10:53:11.900752 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[92835e34-d21c-458d-a2a7-535b4c871e85] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4001668140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004aab40 TLS:<nil>}
I1101 10:53:11.900817 2885313 retry.go:31] will retry after 3.68662ms: Temporary Error: unexpected response code: 503
I1101 10:53:11.907676 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c87c808a-87f1-4fc5-b1db-fc0b30fc2c94] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x40016681c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004aac80 TLS:<nil>}
I1101 10:53:11.907736 2885313 retry.go:31] will retry after 5.451838ms: Temporary Error: unexpected response code: 503
I1101 10:53:11.916597 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[839855a8-20ec-47f7-81df-4346f0009d8f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4001668240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004aadc0 TLS:<nil>}
I1101 10:53:11.916656 2885313 retry.go:31] will retry after 3.895171ms: Temporary Error: unexpected response code: 503
I1101 10:53:11.923526 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[238b46fd-de63-43be-843f-10f325c5bc81] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4000773f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282c80 TLS:<nil>}
I1101 10:53:11.923586 2885313 retry.go:31] will retry after 6.918512ms: Temporary Error: unexpected response code: 503
I1101 10:53:11.933537 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5cce4808-47ef-4103-ab2b-d218948adf53] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4001668380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282dc0 TLS:<nil>}
I1101 10:53:11.933596 2885313 retry.go:31] will retry after 13.673795ms: Temporary Error: unexpected response code: 503
I1101 10:53:11.950508 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5ea5802a-a01c-4181-abc4-0afb15a4ff78] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x40015f0040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282f00 TLS:<nil>}
I1101 10:53:11.950567 2885313 retry.go:31] will retry after 23.111872ms: Temporary Error: unexpected response code: 503
I1101 10:53:11.978216 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[af0b87b4-c1ac-4022-b5a7-69bd8f5f42ea] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4001668480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab180 TLS:<nil>}
I1101 10:53:11.978296 2885313 retry.go:31] will retry after 38.245162ms: Temporary Error: unexpected response code: 503
I1101 10:53:12.020873 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0b4a9f21-3341-4371-91d7-ece3accd0fb7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:12 GMT]] Body:0x4001668540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283040 TLS:<nil>}
I1101 10:53:12.020940 2885313 retry.go:31] will retry after 40.213563ms: Temporary Error: unexpected response code: 503
I1101 10:53:12.065340 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[16be7f64-a9fe-4be5-b6c2-aa73f1e7ed8d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:12 GMT]] Body:0x40015f0180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283180 TLS:<nil>}
I1101 10:53:12.065413 2885313 retry.go:31] will retry after 78.371596ms: Temporary Error: unexpected response code: 503
I1101 10:53:12.147537 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[69b66fae-1150-410f-99d5-b5f4d878c66e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:12 GMT]] Body:0x4001668640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab2c0 TLS:<nil>}
I1101 10:53:12.147601 2885313 retry.go:31] will retry after 73.326853ms: Temporary Error: unexpected response code: 503
I1101 10:53:12.226033 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fe11373e-acb8-4a1f-8b42-b722e10320ef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:12 GMT]] Body:0x40015f0280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002832c0 TLS:<nil>}
I1101 10:53:12.226115 2885313 retry.go:31] will retry after 75.369392ms: Temporary Error: unexpected response code: 503
I1101 10:53:12.305884 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[442fef44-31ec-4863-bc8f-7797aac3d329] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:12 GMT]] Body:0x40015f0300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283400 TLS:<nil>}
I1101 10:53:12.305942 2885313 retry.go:31] will retry after 255.926123ms: Temporary Error: unexpected response code: 503
I1101 10:53:12.566191 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[30af9a6d-7c74-4405-9d6a-521da8ce4b7a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:12 GMT]] Body:0x40015f0380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283540 TLS:<nil>}
I1101 10:53:12.566250 2885313 retry.go:31] will retry after 434.379889ms: Temporary Error: unexpected response code: 503
I1101 10:53:13.004865 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e92b6e10-5ab4-43e6-bc16-798a6717168b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:13 GMT]] Body:0x40015f0400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283680 TLS:<nil>}
I1101 10:53:13.004931 2885313 retry.go:31] will retry after 683.564587ms: Temporary Error: unexpected response code: 503
I1101 10:53:13.691321 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f4819e1a-1032-4754-99e0-592ef0414000] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:13 GMT]] Body:0x40016688c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab400 TLS:<nil>}
I1101 10:53:13.691387 2885313 retry.go:31] will retry after 456.152727ms: Temporary Error: unexpected response code: 503
I1101 10:53:14.151064 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7505ef2-2263-4ab5-bc2d-7ef17c7cc44c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:14 GMT]] Body:0x4001668980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002837c0 TLS:<nil>}
I1101 10:53:14.151126 2885313 retry.go:31] will retry after 930.392924ms: Temporary Error: unexpected response code: 503
I1101 10:53:15.084904 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a9216e64-24aa-4404-9dcf-4b7a4831f536] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:15 GMT]] Body:0x40015f0580 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab680 TLS:<nil>}
I1101 10:53:15.084979 2885313 retry.go:31] will retry after 2.485343078s: Temporary Error: unexpected response code: 503
I1101 10:53:17.573548 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[db8697ab-d114-4a28-80c4-0697f8fb4432] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:17 GMT]] Body:0x4001668a80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283900 TLS:<nil>}
I1101 10:53:17.573613 2885313 retry.go:31] will retry after 3.172164253s: Temporary Error: unexpected response code: 503
I1101 10:53:20.751076 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d59c3c95-1f08-404e-ba3c-9d1e062f1b46] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:20 GMT]] Body:0x4001668b40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283a40 TLS:<nil>}
I1101 10:53:20.751141 2885313 retry.go:31] will retry after 3.659852909s: Temporary Error: unexpected response code: 503
I1101 10:53:24.414974 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f0a8229-51c6-403c-b65e-08ac46fabcba] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:24 GMT]] Body:0x4001668c00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283b80 TLS:<nil>}
I1101 10:53:24.415047 2885313 retry.go:31] will retry after 3.413096743s: Temporary Error: unexpected response code: 503
I1101 10:53:27.833568 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0182b091-7e8b-4d15-a673-e0a7758a5a84] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:27 GMT]] Body:0x40015f0700 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004abcc0 TLS:<nil>}
I1101 10:53:27.833628 2885313 retry.go:31] will retry after 4.404351659s: Temporary Error: unexpected response code: 503
I1101 10:53:32.241838 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cb73cbea-fe9f-40f9-a58c-e47b0b32db90] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:32 GMT]] Body:0x40015f07c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004abe00 TLS:<nil>}
I1101 10:53:32.241901 2885313 retry.go:31] will retry after 10.966516708s: Temporary Error: unexpected response code: 503
I1101 10:53:43.211269 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7e757c8-9354-4ee5-be24-30f4881ac811] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:43 GMT]] Body:0x4001668d40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283cc0 TLS:<nil>}
I1101 10:53:43.211333 2885313 retry.go:31] will retry after 22.575667464s: Temporary Error: unexpected response code: 503
I1101 10:54:05.790639 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9603fdff-5249-4725-8024-51e01a8e7070] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:54:05 GMT]] Body:0x40015f08c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032b400 TLS:<nil>}
I1101 10:54:05.790711 2885313 retry.go:31] will retry after 17.288476517s: Temporary Error: unexpected response code: 503
I1101 10:54:23.082188 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0db97852-564a-4cab-9a87-475a7c2aae1e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:54:23 GMT]] Body:0x4001668e40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032b540 TLS:<nil>}
I1101 10:54:23.082251 2885313 retry.go:31] will retry after 30.078353988s: Temporary Error: unexpected response code: 503
I1101 10:54:53.164988 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d42d67f5-a698-4086-93a8-75c7e3ec465e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:54:53 GMT]] Body:0x40015f09c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032b680 TLS:<nil>}
I1101 10:54:53.165048 2885313 retry.go:31] will retry after 1m22.959111076s: Temporary Error: unexpected response code: 503
I1101 10:56:16.127468 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8bc1fd85-8cb8-406f-8942-a9848ab0f4db] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:56:16 GMT]] Body:0x40015f0080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032b7c0 TLS:<nil>}
I1101 10:56:16.127535 2885313 retry.go:31] will retry after 1m2.273111896s: Temporary Error: unexpected response code: 503
I1101 10:57:18.404658 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0079bbe3-35d4-4060-b9be-c9e0c15865c5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:57:18 GMT]] Body:0x40016680c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032b900 TLS:<nil>}
I1101 10:57:18.404731 2885313 retry.go:31] will retry after 37.387614013s: Temporary Error: unexpected response code: 503
I1101 10:57:55.796386 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a2102068-f986-4bda-9a62-43dd4e24a6a3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:57:55 GMT]] Body:0x40015f0140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032ba40 TLS:<nil>}
I1101 10:57:55.796465 2885313 retry.go:31] will retry after 31.004224405s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-269105
helpers_test.go:243: (dbg) docker inspect functional-269105:

-- stdout --
	[
	    {
	        "Id": "24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224",
	        "Created": "2025-11-01T10:50:44.925723589Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2875010,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:50:44.999173611Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224/hostname",
	        "HostsPath": "/var/lib/docker/containers/24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224/hosts",
	        "LogPath": "/var/lib/docker/containers/24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224/24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224-json.log",
	        "Name": "/functional-269105",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-269105:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-269105",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224",
	                "LowerDir": "/var/lib/docker/overlay2/50ed4e506a20c8539dad8bf357af86d13d6e0b1038e2fdb0c85fac0d21b181ec-init/diff:/var/lib/docker/overlay2/6ccbdc4e59211c61d83d46bc353aa66c1a8dd6bb2f77e16ffc85d068d750bbe6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/50ed4e506a20c8539dad8bf357af86d13d6e0b1038e2fdb0c85fac0d21b181ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/50ed4e506a20c8539dad8bf357af86d13d6e0b1038e2fdb0c85fac0d21b181ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/50ed4e506a20c8539dad8bf357af86d13d6e0b1038e2fdb0c85fac0d21b181ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-269105",
	                "Source": "/var/lib/docker/volumes/functional-269105/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-269105",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-269105",
	                "name.minikube.sigs.k8s.io": "functional-269105",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2538e1277595786df7322eb87eed1ac089387f33cba65ce30c44c5c638511e7a",
	            "SandboxKey": "/var/run/docker/netns/2538e1277595",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36806"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36807"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36810"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36808"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36809"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-269105": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:96:6b:6a:f0:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "abad2735c74ff3fa0465945d7ef9b035766ef74981eab5752e96fb447c0a5f1c",
	                    "EndpointID": "d44c9fd95c440eda4875ce4016a71e802cd53b293953d2f843ca4205ca9bfc95",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-269105",
	                        "24a1cb67b38d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-269105 -n functional-269105
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-269105 logs -n 25: (1.473646905s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-269105 image load --daemon kicbase/echo-server:functional-269105 --alsologtostderr                                                                   │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ image          │ functional-269105 image ls                                                                                                                                      │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ image          │ functional-269105 image save kicbase/echo-server:functional-269105 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ image          │ functional-269105 image rm kicbase/echo-server:functional-269105 --alsologtostderr                                                                              │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ image          │ functional-269105 image ls                                                                                                                                      │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ image          │ functional-269105 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ image          │ functional-269105 image ls                                                                                                                                      │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ image          │ functional-269105 image save --daemon kicbase/echo-server:functional-269105 --alsologtostderr                                                                   │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ ssh            │ functional-269105 ssh sudo cat /etc/test/nested/copy/2849422/hosts                                                                                              │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ ssh            │ functional-269105 ssh sudo cat /etc/ssl/certs/2849422.pem                                                                                                       │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ ssh            │ functional-269105 ssh sudo cat /usr/share/ca-certificates/2849422.pem                                                                                           │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ ssh            │ functional-269105 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ ssh            │ functional-269105 ssh sudo cat /etc/ssl/certs/28494222.pem                                                                                                      │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ ssh            │ functional-269105 ssh sudo cat /usr/share/ca-certificates/28494222.pem                                                                                          │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ ssh            │ functional-269105 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ image          │ functional-269105 image ls --format short --alsologtostderr                                                                                                     │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ update-context │ functional-269105 update-context --alsologtostderr -v=2                                                                                                         │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ ssh            │ functional-269105 ssh pgrep buildkitd                                                                                                                           │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │                     │
	│ image          │ functional-269105 image build -t localhost/my-image:functional-269105 testdata/build --alsologtostderr                                                          │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ image          │ functional-269105 image ls                                                                                                                                      │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ image          │ functional-269105 image ls --format yaml --alsologtostderr                                                                                                      │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ image          │ functional-269105 image ls --format json --alsologtostderr                                                                                                      │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ image          │ functional-269105 image ls --format table --alsologtostderr                                                                                                     │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ update-context │ functional-269105 update-context --alsologtostderr -v=2                                                                                                         │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	│ update-context │ functional-269105 update-context --alsologtostderr -v=2                                                                                                         │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:53:10
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:53:10.092170 2885162 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:53:10.092390 2885162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:53:10.092413 2885162 out.go:374] Setting ErrFile to fd 2...
	I1101 10:53:10.092434 2885162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:53:10.092747 2885162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
	I1101 10:53:10.093161 2885162 out.go:368] Setting JSON to false
	I1101 10:53:10.094206 2885162 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70536,"bootTime":1761923854,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 10:53:10.094329 2885162 start.go:143] virtualization:  
	I1101 10:53:10.097765 2885162 out.go:179] * [functional-269105] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:53:10.100904 2885162 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:53:10.100975 2885162 notify.go:221] Checking for updates...
	I1101 10:53:10.106866 2885162 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:53:10.109873 2885162 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	I1101 10:53:10.113422 2885162 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	I1101 10:53:10.116395 2885162 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:53:10.119271 2885162 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:53:10.122573 2885162 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 10:53:10.123146 2885162 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:53:10.151134 2885162 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:53:10.151239 2885162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:53:10.231673 2885162 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:53:10.22209804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:53:10.231778 2885162 docker.go:319] overlay module found
	I1101 10:53:10.235202 2885162 out.go:179] * Using the docker driver based on existing profile
	I1101 10:53:10.238334 2885162 start.go:309] selected driver: docker
	I1101 10:53:10.238355 2885162 start.go:930] validating driver "docker" against &{Name:functional-269105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-269105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:53:10.238453 2885162 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:53:10.238571 2885162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:53:10.313341 2885162 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:53:10.303701836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:53:10.313765 2885162 cni.go:84] Creating CNI manager for ""
	I1101 10:53:10.313826 2885162 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1101 10:53:10.313879 2885162 start.go:353] cluster config:
	{Name:functional-269105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-269105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:53:10.317647 2885162 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c5ce950fccd37       1611cd07b61d5       5 minutes ago       Exited              mount-munger              0                   4dd0baebf0726       busybox-mount                               default
	f5190b42b4a13       ce2d2cda2d858       5 minutes ago       Running             echo-server               0                   316e15de3b5bf       hello-node-75c85bcc94-r6zbp                 default
	2074895f39dce       46fabdd7f288c       5 minutes ago       Running             myfrontend                0                   fc327f3b62760       sp-pod                                      default
	e002ff4226b22       ce2d2cda2d858       5 minutes ago       Running             echo-server               0                   e7541b814d11e       hello-node-connect-7d85dfc575-nfspr         default
	a1d1a8f616e8f       cbad6347cca28       5 minutes ago       Running             nginx                     0                   89487537068a3       nginx-svc                                   default
	5fe425c496959       ba04bb24b9575       5 minutes ago       Running             storage-provisioner       2                   7e48d6f447efa       storage-provisioner                         kube-system
	e51a5e843eeba       43911e833d64d       6 minutes ago       Running             kube-apiserver            0                   b271ca0a3d8f1       kube-apiserver-functional-269105            kube-system
	fa15b362c02f8       7eb2c6ff0c5a7       6 minutes ago       Running             kube-controller-manager   2                   c32008206199f       kube-controller-manager-functional-269105   kube-system
	365bcd5f34ee3       a1894772a478e       6 minutes ago       Running             etcd                      1                   6b883b8e1ec4e       etcd-functional-269105                      kube-system
	91c3f7ce6c557       ba04bb24b9575       6 minutes ago       Exited              storage-provisioner       1                   7e48d6f447efa       storage-provisioner                         kube-system
	03b2b4b635c97       138784d87c9c5       6 minutes ago       Running             coredns                   1                   4c1b03c734999       coredns-66bc5c9577-crvrc                    kube-system
	14ba992862f8d       05baa95f5142d       6 minutes ago       Running             kube-proxy                1                   d1ed1e61fa70c       kube-proxy-mwwf8                            kube-system
	2ebe0ddeb3952       b1a8c6f707935       6 minutes ago       Running             kindnet-cni               1                   bafc54f24528b       kindnet-fz7g5                               kube-system
	6b6a8c316335b       7eb2c6ff0c5a7       6 minutes ago       Exited              kube-controller-manager   1                   c32008206199f       kube-controller-manager-functional-269105   kube-system
	e27b4d294d5a2       b5f57ec6b9867       6 minutes ago       Running             kube-scheduler            1                   7b8367ea850bf       kube-scheduler-functional-269105            kube-system
	8e8aa255cf006       138784d87c9c5       6 minutes ago       Exited              coredns                   0                   4c1b03c734999       coredns-66bc5c9577-crvrc                    kube-system
	adf77118d73f2       b1a8c6f707935       6 minutes ago       Exited              kindnet-cni               0                   bafc54f24528b       kindnet-fz7g5                               kube-system
	baed3d2fb8f06       05baa95f5142d       6 minutes ago       Exited              kube-proxy                0                   d1ed1e61fa70c       kube-proxy-mwwf8                            kube-system
	60cd23f465992       b5f57ec6b9867       7 minutes ago       Exited              kube-scheduler            0                   7b8367ea850bf       kube-scheduler-functional-269105            kube-system
	c0dd00cac2ad0       a1894772a478e       7 minutes ago       Exited              etcd                      0                   6b883b8e1ec4e       etcd-functional-269105                      kube-system
	
	
	==> containerd <==
	Nov 01 10:53:52 functional-269105 containerd[3607]: time="2025-11-01T10:53:52.174799693Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Nov 01 10:53:52 functional-269105 containerd[3607]: time="2025-11-01T10:53:52.177467546Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 01 10:53:52 functional-269105 containerd[3607]: time="2025-11-01T10:53:52.287931259Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 01 10:53:52 functional-269105 containerd[3607]: time="2025-11-01T10:53:52.569837728Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 10:53:52 functional-269105 containerd[3607]: time="2025-11-01T10:53:52.569950069Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.177869821Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.181552546Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.311725009Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.612548806Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.612593605Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.613940832Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.616281088Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.749275288Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 01 10:54:43 functional-269105 containerd[3607]: time="2025-11-01T10:54:43.014974292Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 10:54:43 functional-269105 containerd[3607]: time="2025-11-01T10:54:43.015036395Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Nov 01 10:56:07 functional-269105 containerd[3607]: time="2025-11-01T10:56:07.175156238Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Nov 01 10:56:07 functional-269105 containerd[3607]: time="2025-11-01T10:56:07.177554454Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 01 10:56:07 functional-269105 containerd[3607]: time="2025-11-01T10:56:07.307896191Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 01 10:56:07 functional-269105 containerd[3607]: time="2025-11-01T10:56:07.717531547Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 10:56:07 functional-269105 containerd[3607]: time="2025-11-01T10:56:07.717641156Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=12709"
	Nov 01 10:56:13 functional-269105 containerd[3607]: time="2025-11-01T10:56:13.174916987Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Nov 01 10:56:13 functional-269105 containerd[3607]: time="2025-11-01T10:56:13.177309378Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 01 10:56:13 functional-269105 containerd[3607]: time="2025-11-01T10:56:13.316584510Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 01 10:56:13 functional-269105 containerd[3607]: time="2025-11-01T10:56:13.618061555Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 01 10:56:13 functional-269105 containerd[3607]: time="2025-11-01T10:56:13.618172682Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	
	
	==> coredns [03b2b4b635c97f892ebb1f38bf0ad8ae8742a295ad6f2d4509118b91d8482940] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33051 - 55519 "HINFO IN 7014377530121784897.6476830027086058127. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021658161s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8e8aa255cf0060a99c2e47d38779c19dd5993322788ecf71fee0779181966448] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37081 - 27007 "HINFO IN 6597586311678208152.2332564119826185821. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028154742s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-269105
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-269105
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=functional-269105
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_51_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:51:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-269105
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:58:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:57:51 +0000   Sat, 01 Nov 2025 10:51:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:57:51 +0000   Sat, 01 Nov 2025 10:51:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:57:51 +0000   Sat, 01 Nov 2025 10:51:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:57:51 +0000   Sat, 01 Nov 2025 10:51:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-269105
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                f6014abf-b080-49ee-aa0d-a14a82ee2829
	  Boot ID:                    eebecd53-57fd-46e5-aa39-103fca906436
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-r6zbp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  default                     hello-node-connect-7d85dfc575-nfspr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 coredns-66bc5c9577-crvrc                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m56s
	  kube-system                 etcd-functional-269105                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m1s
	  kube-system                 kindnet-fz7g5                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m56s
	  kube-system                 kube-apiserver-functional-269105              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-controller-manager-functional-269105     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m1s
	  kube-system                 kube-proxy-mwwf8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-scheduler-functional-269105              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-cjngd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-bcspl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 6m54s                kube-proxy       
	  Normal   Starting                 5m56s                kube-proxy       
	  Normal   Starting                 7m9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m9s (x8 over 7m9s)  kubelet          Node functional-269105 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m9s (x8 over 7m9s)  kubelet          Node functional-269105 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m9s (x7 over 7m9s)  kubelet          Node functional-269105 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeAllocatableEnforced  7m1s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 7m1s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m1s                 kubelet          Node functional-269105 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m1s                 kubelet          Node functional-269105 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m1s                 kubelet          Node functional-269105 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m1s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           6m57s                node-controller  Node functional-269105 event: Registered Node functional-269105 in Controller
	  Normal   NodeReady                6m45s                kubelet          Node functional-269105 status is now: NodeReady
	  Normal   Starting                 6m1s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m1s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m1s (x8 over 6m1s)  kubelet          Node functional-269105 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m1s (x8 over 6m1s)  kubelet          Node functional-269105 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m1s (x7 over 6m1s)  kubelet          Node functional-269105 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           5m54s                node-controller  Node functional-269105 event: Registered Node functional-269105 in Controller
	
	
	==> dmesg <==
	[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
	[  +0.217637] overlayfs: idmapped layers are currently not supported
	[ +42.063471] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:30] overlayfs: idmapped layers are currently not supported
	[ +22.794250] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:31] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:32] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:33] overlayfs: idmapped layers are currently not supported
	[ +18.806441] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:34] overlayfs: idmapped layers are currently not supported
	[ +47.017810] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:35] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:36] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:37] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:38] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:39] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:40] overlayfs: idmapped layers are currently not supported
	[Nov 1 09:42] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 1 10:42] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [365bcd5f34ee351f9166d5c6e420daa4dc4c09cea3b62f698486c9b8d7beace5] <==
	{"level":"warn","ts":"2025-11-01T10:52:12.645395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.660361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.676630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.694103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.717242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.733672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.747768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.763710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.779378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.802787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.815632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.831962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.844904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.859904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.876214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.892698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.912691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.924929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.949239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.956919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:12.977665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:13.005423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:13.019111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:13.034312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:52:13.103882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43646","server-name":"","error":"EOF"}
	
	
	==> etcd [c0dd00cac2ad07744f2cb1c2bdced881cf3e162716ee5c949fb36b9c6d2896eb] <==
	{"level":"warn","ts":"2025-11-01T10:51:06.152321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:06.182625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:06.236865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:06.273523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:06.297402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:06.324402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:51:06.442777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42926","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:52:06.238841Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:52:06.238905Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-269105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-01T10:52:06.239011Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:52:06.240533Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:52:06.241966Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:52:06.242016Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-01T10:52:06.242030Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:52:06.242079Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-01T10:52:06.242081Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-11-01T10:52:06.242088Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:52:06.242094Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T10:52:06.242154Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:52:06.242164Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:52:06.242171Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:52:06.245367Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-01T10:52:06.245444Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:52:06.245478Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-01T10:52:06.245489Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-269105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:58:11 up 19:40,  0 user,  load average: 0.35, 1.10, 2.38
	Linux functional-269105 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2ebe0ddeb3952a0f5601697fe61194fc062cb227c6382ea92df14f78f7317c45] <==
	I1101 10:56:07.249711       1 main.go:301] handling current node
	I1101 10:56:17.249658       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:56:17.249890       1 main.go:301] handling current node
	I1101 10:56:27.249628       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:56:27.249661       1 main.go:301] handling current node
	I1101 10:56:37.249311       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:56:37.249345       1 main.go:301] handling current node
	I1101 10:56:47.249682       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:56:47.249719       1 main.go:301] handling current node
	I1101 10:56:57.252064       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:56:57.252161       1 main.go:301] handling current node
	I1101 10:57:07.251534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:57:07.251567       1 main.go:301] handling current node
	I1101 10:57:17.249957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:57:17.249996       1 main.go:301] handling current node
	I1101 10:57:27.250513       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:57:27.250762       1 main.go:301] handling current node
	I1101 10:57:37.251948       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:57:37.252089       1 main.go:301] handling current node
	I1101 10:57:47.250987       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:57:47.251084       1 main.go:301] handling current node
	I1101 10:57:57.252873       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:57:57.253110       1 main.go:301] handling current node
	I1101 10:58:07.251906       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:58:07.251943       1 main.go:301] handling current node
	
	
	==> kindnet [adf77118d73f21068b5b815694e02e44b813306fb85927952fbc2cec23152555] <==
	I1101 10:51:16.659316       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:51:16.659700       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1101 10:51:16.659875       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:51:16.659889       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:51:16.659904       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:51:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:51:16.864970       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:51:16.865079       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:51:16.865145       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:51:16.865410       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:51:17.148740       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:51:17.148770       1 metrics.go:72] Registering metrics
	I1101 10:51:17.148821       1 controller.go:711] "Syncing nftables rules"
	I1101 10:51:26.863441       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:51:26.863497       1 main.go:301] handling current node
	I1101 10:51:36.866703       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:51:36.866738       1 main.go:301] handling current node
	I1101 10:51:46.864936       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:51:46.864975       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e51a5e843eeba1270878d73f1eec896c3fe87319d27a3d712d4daa31006c64e2] <==
	I1101 10:52:13.865901       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:52:13.865908       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:52:13.866056       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:52:13.871957       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:52:13.879157       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:52:13.879247       1 policy_source.go:240] refreshing policies
	I1101 10:52:13.930686       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:52:14.260379       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:52:14.624761       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1101 10:52:14.955386       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1101 10:52:14.956965       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:52:14.962709       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:52:15.712465       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:52:15.868962       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:52:15.961004       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:52:15.968543       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:52:17.564587       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:52:33.731184       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.50.1"}
	I1101 10:52:40.592559       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.197.7"}
	I1101 10:52:49.333102       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.248.51"}
	I1101 10:52:59.021077       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.244.249"}
	E1101 10:53:04.810817       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:36702: use of closed network connection
	I1101 10:53:11.461222       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:53:11.726369       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.88.212"}
	I1101 10:53:11.749500       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.213.84"}
	
	
	==> kube-controller-manager [6b6a8c316335bb5ca5f7ee0a54417b6beafa45ff0afc2bca844650a9216fae0a] <==
	I1101 10:51:58.614810       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:51:59.501595       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1101 10:51:59.501623       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:51:59.503448       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1101 10:51:59.503945       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:51:59.504023       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 10:51:59.504132       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1101 10:52:09.505223       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [fa15b362c02f8caa5d7f3bac3d179d62ea47d6a52a47e7a6ccf0d55e83580696] <==
	I1101 10:52:17.184067       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:52:17.186517       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:52:17.187666       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:52:17.190627       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:52:17.193825       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:52:17.196035       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:52:17.199250       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:52:17.202636       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:52:17.205208       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:52:17.205420       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:52:17.206789       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:52:17.206978       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:52:17.207099       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:52:17.207143       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:52:17.210464       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:52:17.219968       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:52:17.234116       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:52:17.234140       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:52:17.234148       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1101 10:53:11.568407       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:53:11.570135       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:53:11.589267       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:53:11.589571       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:53:11.598266       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:53:11.602529       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [14ba992862f8d7c1f6164f45ff4c5cccd522cf9a0ea0c5366eb404e264b85486] <==
	I1101 10:51:59.666375       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1101 10:51:59.667366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-269105&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:52:00.998574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-269105&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:52:02.611496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-269105&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:52:07.585461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-269105&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1101 10:52:15.166775       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:52:15.166817       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 10:52:15.167020       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:52:15.188281       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:52:15.188346       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:52:15.192755       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:52:15.193319       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:52:15.193358       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:52:15.196122       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:52:15.196269       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:52:15.196657       1 config.go:200] "Starting service config controller"
	I1101 10:52:15.196764       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:52:15.197129       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:52:15.197190       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:52:15.197699       1 config.go:309] "Starting node config controller"
	I1101 10:52:15.197762       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:52:15.197814       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:52:15.297002       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:52:15.297073       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:52:15.297332       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [baed3d2fb8f06f29a4fb89c40452ef21b9d69908da32eda6c00e74855e85bcf2] <==
	I1101 10:51:16.482515       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:51:16.607250       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:51:16.709879       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:51:16.710120       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 10:51:16.710327       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:51:16.741290       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:51:16.741515       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:51:16.760080       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:51:16.760415       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:51:16.760439       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:51:16.761885       1 config.go:200] "Starting service config controller"
	I1101 10:51:16.761901       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:51:16.761920       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:51:16.761924       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:51:16.761937       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:51:16.761941       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:51:16.762634       1 config.go:309] "Starting node config controller"
	I1101 10:51:16.762648       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:51:16.762654       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:51:16.862927       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:51:16.863027       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:51:16.863052       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [60cd23f46599220cd3ae9cf8e8c43ee41efa10f43df75b554890316fd0090f27] <==
	E1101 10:51:08.377524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:51:08.378237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:51:08.378447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:51:08.378670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:51:08.378892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:51:08.379145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:51:08.379372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:51:08.379709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:51:08.379990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:51:08.380198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:51:08.380392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:51:08.380585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:51:08.380759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:51:08.384441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:51:08.384521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:51:08.384726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:51:08.384806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:51:08.385657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1101 10:51:09.666830       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:51:56.108553       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 10:51:56.108589       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 10:51:56.108612       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:51:56.108656       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:51:56.108923       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 10:51:56.108938       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e27b4d294d5a2f75c8ca910bc1e0ffa8be05b4fae67fa8834535b131fc0c1873] <==
	E1101 10:52:03.387752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:52:03.398994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:52:03.704914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:52:03.754688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:52:04.016177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:52:06.478373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:52:06.637771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:52:06.758910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:52:06.838932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:52:06.933123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:52:07.456110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:52:07.492739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:52:07.715371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:52:07.754106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:52:07.814994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:52:07.822770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:52:07.839521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:52:07.945770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:52:08.077896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1101 10:52:08.236566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:52:08.240307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:52:08.614919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:52:08.676716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:52:09.044806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1101 10:52:15.998505       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:56:07 functional-269105 kubelet[4609]: E1101 10:56:07.717892    4609 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 10:56:07 functional-269105 kubelet[4609]: E1101 10:56:07.717969    4609 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-bcspl_kubernetes-dashboard(a35d85fa-a948-46ed-9bc5-a3e3dcd9a648): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 10:56:07 functional-269105 kubelet[4609]: E1101 10:56:07.718027    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
	Nov 01 10:56:13 functional-269105 kubelet[4609]: E1101 10:56:13.618379    4609 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 10:56:13 functional-269105 kubelet[4609]: E1101 10:56:13.618452    4609 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 10:56:13 functional-269105 kubelet[4609]: E1101 10:56:13.618536    4609 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-cjngd_kubernetes-dashboard(ea4196d0-795a-4917-87ce-f61ae24a5972): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 10:56:13 functional-269105 kubelet[4609]: E1101 10:56:13.618577    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
	Nov 01 10:56:21 functional-269105 kubelet[4609]: E1101 10:56:21.174666    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
	Nov 01 10:56:26 functional-269105 kubelet[4609]: E1101 10:56:26.175688    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
	Nov 01 10:56:36 functional-269105 kubelet[4609]: E1101 10:56:36.175534    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
	Nov 01 10:56:38 functional-269105 kubelet[4609]: E1101 10:56:38.174893    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
	Nov 01 10:56:49 functional-269105 kubelet[4609]: E1101 10:56:49.174134    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
	Nov 01 10:56:53 functional-269105 kubelet[4609]: E1101 10:56:53.175070    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
	Nov 01 10:57:01 functional-269105 kubelet[4609]: E1101 10:57:01.175110    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
	Nov 01 10:57:04 functional-269105 kubelet[4609]: E1101 10:57:04.177447    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
	Nov 01 10:57:14 functional-269105 kubelet[4609]: E1101 10:57:14.175015    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
	Nov 01 10:57:17 functional-269105 kubelet[4609]: E1101 10:57:17.174678    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
	Nov 01 10:57:26 functional-269105 kubelet[4609]: E1101 10:57:26.174693    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
	Nov 01 10:57:29 functional-269105 kubelet[4609]: E1101 10:57:29.174724    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
	Nov 01 10:57:37 functional-269105 kubelet[4609]: E1101 10:57:37.174448    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
	Nov 01 10:57:44 functional-269105 kubelet[4609]: E1101 10:57:44.175231    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
	Nov 01 10:57:48 functional-269105 kubelet[4609]: E1101 10:57:48.175229    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
	Nov 01 10:57:55 functional-269105 kubelet[4609]: E1101 10:57:55.175741    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
	Nov 01 10:58:00 functional-269105 kubelet[4609]: E1101 10:58:00.177728    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
	Nov 01 10:58:07 functional-269105 kubelet[4609]: E1101 10:58:07.174443    4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
	
	
	==> storage-provisioner [5fe425c496959a4e66b47431be04adee713124560f22706c4d211802311c377d] <==
	W1101 10:57:47.410151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:49.412811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:49.419175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:51.422141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:51.428625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:53.431498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:53.435943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:55.438729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:55.443286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:57.446496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:57.451339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:59.455155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:57:59.462431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:58:01.465119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:58:01.471799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:58:03.474906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:58:03.479549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:58:05.482671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:58:05.489213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:58:07.492863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:58:07.497481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:58:09.501559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:58:09.508463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:58:11.511758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:58:11.516933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [91c3f7ce6c557d31d2556cad6567e96e0a92b868430e3e0debaf01906bb9de59] <==
	I1101 10:51:59.457289       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:51:59.460650       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
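The kubelet entries in the log above show the actual cause of the failure: pulls of kubernetesui/dashboard and kubernetesui/metrics-scraper were refused by registry-1.docker.io with 429 Too Many Requests, i.e. the unauthenticated Docker Hub pull limit was exhausted on this host, so both dashboard pods stayed in ImagePullBackOff. A minimal Go sketch for checking the remaining anonymous quota from the affected machine, assuming Docker Hub's ratelimitpreview/test probe image and the auth.docker.io token endpoint (this is not part of the minikube test suite):

	// ratelimit_check.go: sketch only; the auth.docker.io token flow and the
	// ratelimitpreview/test probe are assumptions about Docker Hub, not test code.
	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	func main() {
		// 1. Fetch an anonymous token scoped to the rate-limit probe repository.
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			panic(err)
		}

		// 2. HEAD the probe manifest; the registry reports the quota in headers.
		req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		head, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer head.Body.Close()
		fmt.Println("ratelimit-limit:    ", head.Header.Get("ratelimit-limit"))
		fmt.Println("ratelimit-remaining:", head.Header.Get("ratelimit-remaining"))
	}

If the remaining quota is already zero when a run starts, authenticated pulls or pre-loading the two dashboard images onto the node (for example with minikube image load) would avoid the back-off seen above.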
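The repeated storage-provisioner warnings are unrelated to the failure; they only record that the provisioner still reads v1 Endpoints, which is deprecated in favor of discovery.k8s.io/v1 EndpointSlice. A minimal client-go sketch of the replacement call the warning points to (illustrative only; the kube-system/kube-dns service is just an example target):

	// endpointslice_list.go: lists EndpointSlices for a service by the standard
	// kubernetes.io/service-name label instead of reading v1 Endpoints.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, "endpoints:", len(s.Endpoints))
		}
	}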
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-269105 -n functional-269105
helpers_test.go:269: (dbg) Run:  kubectl --context functional-269105 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount dashboard-metrics-scraper-77bf4d6c4c-cjngd kubernetes-dashboard-855c9754f9-bcspl
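The post-mortem query above is the standard "anything not Running" check. A rough client-go equivalent of that kubectl invocation, shown only for illustration (kubeconfig path and error handling simplified; this is not the helpers_test.go implementation):

	// nonrunning_pods.go: same field selector as the kubectl command above,
	// across all namespaces; illustrative sketch only.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace+"/"+p.Name, p.Status.Phase)
		}
	}

Note that completed pods match this selector too (phase Succeeded is not Running), which is why the finished busybox-mount pod is listed alongside the two ImagePullBackOff dashboard pods.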
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-269105 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-cjngd kubernetes-dashboard-855c9754f9-bcspl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-269105 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-cjngd kubernetes-dashboard-855c9754f9-bcspl: exit status 1 (87.807233ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-269105/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 10:53:08 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://c5ce950fccd378e7cf73fd7f99f12ab3311e639ce1ab582b327db40c7354bcd0
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 10:53:11 +0000
	      Finished:     Sat, 01 Nov 2025 10:53:11 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qpt57 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-qpt57:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5m4s  default-scheduler  Successfully assigned default/busybox-mount to functional-269105
	  Normal  Pulling    5m4s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m2s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.221s (2.221s including waiting). Image size: 1935750 bytes.
	  Normal  Created    5m2s  kubelet            Created container: mount-munger
	  Normal  Started    5m1s  kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-cjngd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bcspl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-269105 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-cjngd kubernetes-dashboard-855c9754f9-bcspl: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.51s)

                                                
                                    
TestKubernetesUpgrade (539.09s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-847244 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-847244 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.717510505s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-847244
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-847244: (1.485809515s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-847244 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-847244 status --format={{.Host}}: exit status 7 (71.667994ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-847244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1101 11:26:01.181253 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-847244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.52585127s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-847244 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-847244 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-847244 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (91.234841ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-847244] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-847244
	    minikube start -p kubernetes-upgrade-847244 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8472442 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-847244 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
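This refusal is the expected branch of the test ("should fail" above): minikube exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED) instead of modifying the running v1.34.1 cluster, and prints the delete/second-cluster/keep-version suggestions verbatim. A minimal sketch of asserting that kind of exit status with os/exec (illustrative; the real harness has its own command runner):

	// downgrade_exitcode.go: runs the same downgrade command as above and checks
	// the exit status; sketch only, not the version_upgrade_test.go implementation.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start",
			"-p", "kubernetes-upgrade-847244", "--memory=3072",
			"--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=containerd")
		err := cmd.Run()

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("unexpected success: the downgrade should be refused")
		case errors.As(err, &exitErr):
			// 106 corresponds to K8S_DOWNGRADE_UNSUPPORTED in the run above.
			fmt.Println("exit status:", exitErr.ExitCode())
		default:
			fmt.Println("could not run command:", err)
		}
	}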
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-847244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-847244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 105 (6m14.974165228s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-847244] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-847244" primary control-plane node in "kubernetes-upgrade-847244" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	* Enabled addons: 
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:26:09.733499 3018443 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:26:09.733695 3018443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:26:09.733722 3018443 out.go:374] Setting ErrFile to fd 2...
	I1101 11:26:09.733740 3018443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:26:09.734438 3018443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
	I1101 11:26:09.734846 3018443 out.go:368] Setting JSON to false
	I1101 11:26:09.735885 3018443 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":72516,"bootTime":1761923854,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 11:26:09.735953 3018443 start.go:143] virtualization:  
	I1101 11:26:09.739150 3018443 out.go:179] * [kubernetes-upgrade-847244] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 11:26:09.742950 3018443 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:26:09.743071 3018443 notify.go:221] Checking for updates...
	I1101 11:26:09.749150 3018443 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:26:09.751986 3018443 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	I1101 11:26:09.755033 3018443 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	I1101 11:26:09.757761 3018443 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 11:26:09.760491 3018443 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:26:09.763586 3018443 config.go:182] Loaded profile config "kubernetes-upgrade-847244": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 11:26:09.764250 3018443 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:26:09.805963 3018443 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 11:26:09.806075 3018443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:26:09.862409 3018443 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 11:26:09.852426357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:26:09.862506 3018443 docker.go:319] overlay module found
	I1101 11:26:09.865712 3018443 out.go:179] * Using the docker driver based on existing profile
	I1101 11:26:09.868622 3018443 start.go:309] selected driver: docker
	I1101 11:26:09.868640 3018443 start.go:930] validating driver "docker" against &{Name:kubernetes-upgrade-847244 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-847244 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:26:09.868743 3018443 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:26:09.869485 3018443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:26:09.928158 3018443 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 11:26:09.918071821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:26:09.928502 3018443 cni.go:84] Creating CNI manager for ""
	I1101 11:26:09.928575 3018443 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1101 11:26:09.928610 3018443 start.go:353] cluster config:
	{Name:kubernetes-upgrade-847244 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-847244 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:26:09.931717 3018443 out.go:179] * Starting "kubernetes-upgrade-847244" primary control-plane node in "kubernetes-upgrade-847244" cluster
	I1101 11:26:09.934551 3018443 cache.go:124] Beginning downloading kic base image for docker with containerd
	I1101 11:26:09.937368 3018443 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:26:09.940247 3018443 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1101 11:26:09.940306 3018443 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1101 11:26:09.940317 3018443 cache.go:59] Caching tarball of preloaded images
	I1101 11:26:09.940276 3018443 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:26:09.940410 3018443 preload.go:233] Found /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1101 11:26:09.940420 3018443 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1101 11:26:09.940522 3018443 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/config.json ...
	I1101 11:26:09.957507 3018443 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:26:09.957533 3018443 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:26:09.957552 3018443 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:26:09.957577 3018443 start.go:360] acquireMachinesLock for kubernetes-upgrade-847244: {Name:mk477bee6924246d62a49e783e2db811cba8adfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:26:09.957633 3018443 start.go:364] duration metric: took 35.125µs to acquireMachinesLock for "kubernetes-upgrade-847244"
	I1101 11:26:09.957657 3018443 start.go:96] Skipping create...Using existing machine configuration
	I1101 11:26:09.957666 3018443 fix.go:54] fixHost starting: 
	I1101 11:26:09.957921 3018443 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-847244 --format={{.State.Status}}
	I1101 11:26:09.982651 3018443 fix.go:112] recreateIfNeeded on kubernetes-upgrade-847244: state=Running err=<nil>
	W1101 11:26:09.982686 3018443 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 11:26:09.985990 3018443 out.go:252] * Updating the running docker "kubernetes-upgrade-847244" container ...
	I1101 11:26:09.986029 3018443 machine.go:94] provisionDockerMachine start ...
	I1101 11:26:09.986114 3018443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-847244
	I1101 11:26:10.013198 3018443 main.go:143] libmachine: Using SSH client type: native
	I1101 11:26:10.013636 3018443 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37031 <nil> <nil>}
	I1101 11:26:10.013655 3018443 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:26:10.172938 3018443 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-847244
	
	I1101 11:26:10.173038 3018443 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-847244"
	I1101 11:26:10.173138 3018443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-847244
	I1101 11:26:10.191405 3018443 main.go:143] libmachine: Using SSH client type: native
	I1101 11:26:10.191717 3018443 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37031 <nil> <nil>}
	I1101 11:26:10.191728 3018443 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-847244 && echo "kubernetes-upgrade-847244" | sudo tee /etc/hostname
	I1101 11:26:10.353305 3018443 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-847244
	
	I1101 11:26:10.353380 3018443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-847244
	I1101 11:26:10.373368 3018443 main.go:143] libmachine: Using SSH client type: native
	I1101 11:26:10.373683 3018443 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37031 <nil> <nil>}
	I1101 11:26:10.373706 3018443 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-847244' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-847244/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-847244' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:26:10.536018 3018443 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:26:10.536041 3018443 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-2847530/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-2847530/.minikube}
	I1101 11:26:10.536068 3018443 ubuntu.go:190] setting up certificates
	I1101 11:26:10.536084 3018443 provision.go:84] configureAuth start
	I1101 11:26:10.536154 3018443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-847244
	I1101 11:26:10.553571 3018443 provision.go:143] copyHostCerts
	I1101 11:26:10.553642 3018443 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.pem, removing ...
	I1101 11:26:10.553663 3018443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.pem
	I1101 11:26:10.553747 3018443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.pem (1082 bytes)
	I1101 11:26:10.553853 3018443 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-2847530/.minikube/cert.pem, removing ...
	I1101 11:26:10.553864 3018443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-2847530/.minikube/cert.pem
	I1101 11:26:10.553891 3018443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-2847530/.minikube/cert.pem (1123 bytes)
	I1101 11:26:10.553954 3018443 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-2847530/.minikube/key.pem, removing ...
	I1101 11:26:10.553962 3018443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-2847530/.minikube/key.pem
	I1101 11:26:10.553986 3018443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-2847530/.minikube/key.pem (1675 bytes)
	I1101 11:26:10.554037 3018443 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-847244 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-847244 localhost minikube]
	I1101 11:26:10.810060 3018443 provision.go:177] copyRemoteCerts
	I1101 11:26:10.810173 3018443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:26:10.810250 3018443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-847244
	I1101 11:26:10.830315 3018443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37031 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/kubernetes-upgrade-847244/id_rsa Username:docker}
	I1101 11:26:10.940355 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 11:26:10.958218 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:26:10.977647 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 11:26:11.001283 3018443 provision.go:87] duration metric: took 465.164946ms to configureAuth
	I1101 11:26:11.001376 3018443 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:26:11.001665 3018443 config.go:182] Loaded profile config "kubernetes-upgrade-847244": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 11:26:11.001709 3018443 machine.go:97] duration metric: took 1.015672429s to provisionDockerMachine
	I1101 11:26:11.001738 3018443 start.go:293] postStartSetup for "kubernetes-upgrade-847244" (driver="docker")
	I1101 11:26:11.001777 3018443 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:26:11.001891 3018443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:26:11.001968 3018443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-847244
	I1101 11:26:11.026457 3018443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37031 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/kubernetes-upgrade-847244/id_rsa Username:docker}
	I1101 11:26:11.164644 3018443 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:26:11.182186 3018443 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:26:11.182218 3018443 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:26:11.182230 3018443 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-2847530/.minikube/addons for local assets ...
	I1101 11:26:11.182293 3018443 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-2847530/.minikube/files for local assets ...
	I1101 11:26:11.182381 3018443 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-2847530/.minikube/files/etc/ssl/certs/28494222.pem -> 28494222.pem in /etc/ssl/certs
	I1101 11:26:11.182487 3018443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:26:11.215195 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/files/etc/ssl/certs/28494222.pem --> /etc/ssl/certs/28494222.pem (1708 bytes)
	I1101 11:26:11.279928 3018443 start.go:296] duration metric: took 278.148834ms for postStartSetup
	I1101 11:26:11.280030 3018443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:26:11.280104 3018443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-847244
	I1101 11:26:11.302438 3018443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37031 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/kubernetes-upgrade-847244/id_rsa Username:docker}
	I1101 11:26:11.462893 3018443 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:26:11.468494 3018443 fix.go:56] duration metric: took 1.510821702s for fixHost
	I1101 11:26:11.468516 3018443 start.go:83] releasing machines lock for "kubernetes-upgrade-847244", held for 1.510870751s
	I1101 11:26:11.468595 3018443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-847244
	I1101 11:26:11.492092 3018443 ssh_runner.go:195] Run: cat /version.json
	I1101 11:26:11.492141 3018443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-847244
	I1101 11:26:11.496495 3018443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:26:11.496595 3018443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-847244
	I1101 11:26:11.539321 3018443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37031 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/kubernetes-upgrade-847244/id_rsa Username:docker}
	I1101 11:26:11.541742 3018443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37031 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/kubernetes-upgrade-847244/id_rsa Username:docker}
	I1101 11:26:11.725606 3018443 ssh_runner.go:195] Run: systemctl --version
	I1101 11:26:11.838630 3018443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:26:11.843106 3018443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:26:11.843192 3018443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:26:11.851608 3018443 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 11:26:11.851633 3018443 start.go:496] detecting cgroup driver to use...
	I1101 11:26:11.851671 3018443 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:26:11.851739 3018443 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 11:26:11.891202 3018443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 11:26:11.905893 3018443 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:26:11.905977 3018443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:26:11.930158 3018443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:26:11.945074 3018443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:26:12.357824 3018443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:26:12.633038 3018443 docker.go:234] disabling docker service ...
	I1101 11:26:12.633117 3018443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:26:12.667380 3018443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:26:12.683303 3018443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:26:12.880708 3018443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:26:13.080970 3018443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:26:13.102068 3018443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:26:13.130094 3018443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1101 11:26:13.150011 3018443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1101 11:26:13.164247 3018443 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1101 11:26:13.164362 3018443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1101 11:26:13.174338 3018443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 11:26:13.190328 3018443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1101 11:26:13.200415 3018443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 11:26:13.213007 3018443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:26:13.229172 3018443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1101 11:26:13.244745 3018443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1101 11:26:13.266682 3018443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1101 11:26:13.299503 3018443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:26:13.329363 3018443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:26:13.366626 3018443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:26:13.668356 3018443 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1101 11:26:13.996649 3018443 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1101 11:26:13.996770 3018443 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1101 11:26:14.002305 3018443 start.go:564] Will wait 60s for crictl version
	I1101 11:26:14.002497 3018443 ssh_runner.go:195] Run: which crictl
	I1101 11:26:14.010146 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:26:14.066029 3018443 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1101 11:26:14.066191 3018443 ssh_runner.go:195] Run: containerd --version
	I1101 11:26:14.127201 3018443 ssh_runner.go:195] Run: containerd --version
	I1101 11:26:14.169628 3018443 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1101 11:26:14.172534 3018443 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-847244 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:26:14.194171 3018443 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 11:26:14.198386 3018443 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-847244 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-847244 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:26:14.198500 3018443 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1101 11:26:14.198562 3018443 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:26:14.224989 3018443 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-proxy:v1.34.1". assuming images are not preloaded.
	I1101 11:26:14.225060 3018443 ssh_runner.go:195] Run: which lz4
	I1101 11:26:14.229825 3018443 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 11:26:14.233712 3018443 ssh_runner.go:356] copy: skipping /preloaded.tar.lz4 (exists)
	I1101 11:26:14.233736 3018443 containerd.go:563] duration metric: took 3.958533ms to copy over tarball
	I1101 11:26:14.233787 3018443 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 11:26:20.457402 3018443 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.223586837s)
	I1101 11:26:20.457471 3018443 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	I1101 11:26:20.457563 3018443 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:26:20.485236 3018443 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-proxy:v1.34.1". assuming images are not preloaded.
	I1101 11:26:20.485259 3018443 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 11:26:20.485318 3018443 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:26:20.485523 3018443 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 11:26:20.485678 3018443 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 11:26:20.485783 3018443 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 11:26:20.485858 3018443 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 11:26:20.485956 3018443 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1101 11:26:20.486039 3018443 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1101 11:26:20.486111 3018443 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 11:26:20.487545 3018443 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 11:26:20.488119 3018443 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:26:20.488484 3018443 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 11:26:20.488617 3018443 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1101 11:26:20.488697 3018443 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 11:26:20.488811 3018443 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1101 11:26:20.488863 3018443 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 11:26:20.489014 3018443 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 11:26:20.815592 3018443 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a"
	I1101 11:26:20.815693 3018443 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 11:26:20.819322 3018443 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0"
	I1101 11:26:20.819386 3018443 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1101 11:26:20.825261 3018443 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e"
	I1101 11:26:20.825333 3018443 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1101 11:26:20.825655 3018443 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196"
	I1101 11:26:20.825699 3018443 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1101 11:26:20.825787 3018443 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9"
	I1101 11:26:20.825817 3018443 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1101 11:26:20.833597 3018443 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1101 11:26:20.833668 3018443 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1101 11:26:20.835529 3018443 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc"
	I1101 11:26:20.835589 3018443 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1101 11:26:21.080302 3018443 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1101 11:26:21.080354 3018443 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 11:26:21.080404 3018443 ssh_runner.go:195] Run: which crictl
	I1101 11:26:21.080490 3018443 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1101 11:26:21.080504 3018443 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 11:26:21.080526 3018443 ssh_runner.go:195] Run: which crictl
	I1101 11:26:21.081337 3018443 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1101 11:26:21.081367 3018443 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 11:26:21.081403 3018443 ssh_runner.go:195] Run: which crictl
	I1101 11:26:21.093433 3018443 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1101 11:26:21.093471 3018443 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 11:26:21.093519 3018443 ssh_runner.go:195] Run: which crictl
	I1101 11:26:21.093627 3018443 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1101 11:26:21.093644 3018443 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1101 11:26:21.093668 3018443 ssh_runner.go:195] Run: which crictl
	I1101 11:26:21.093749 3018443 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1101 11:26:21.093764 3018443 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 11:26:21.093787 3018443 ssh_runner.go:195] Run: which crictl
	I1101 11:26:21.093839 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 11:26:21.093877 3018443 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1101 11:26:21.093892 3018443 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1101 11:26:21.093913 3018443 ssh_runner.go:195] Run: which crictl
	I1101 11:26:21.093961 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 11:26:21.097447 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 11:26:21.267062 3018443 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1101 11:26:21.267139 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 11:26:21.267198 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 11:26:21.267241 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 11:26:21.267268 3018443 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1101 11:26:21.267299 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 11:26:21.270749 3018443 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1101 11:26:21.417783 3018443 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1101 11:26:21.417791 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 11:26:21.418137 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 11:26:21.418727 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 11:26:21.484613 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 11:26:21.484697 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 11:26:21.484746 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 11:26:21.584082 3018443 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1101 11:26:21.584110 3018443 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1101 11:26:21.584082 3018443 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	W1101 11:26:21.905137 3018443 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1101 11:26:21.905288 3018443 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1101 11:26:21.905357 3018443 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:26:21.929317 3018443 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1101 11:26:21.929365 3018443 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:26:21.929416 3018443 ssh_runner.go:195] Run: which crictl
	I1101 11:26:21.933053 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:26:21.972397 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:26:22.005242 3018443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:26:22.035292 3018443 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 11:26:22.035394 3018443 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1101 11:26:22.040159 3018443 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1101 11:26:22.040180 3018443 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 11:26:22.040231 3018443 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1101 11:26:22.255239 3018443 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 11:26:22.255293 3018443 cache_images.go:94] duration metric: took 1.770022219s to LoadCachedImages
	W1101 11:26:22.255358 3018443 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1: no such file or directory
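The load aborts because the per-image archive for kube-apiserver is missing from the local cache directory, even though other archives (for example storage-provisioner) were found and transferred. A quick way to see which archives are present is to stat each expected path up front; a minimal sketch, with the directory and file names copied from this log (adjust for a different MINIKUBE_HOME or architecture):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        cacheDir := "/home/jenkins/minikube-integration/21830-2847530/.minikube/cache/images/arm64"
        archives := []string{
            "registry.k8s.io/kube-apiserver_v1.34.1",
            "registry.k8s.io/kube-controller-manager_v1.34.1",
            "registry.k8s.io/kube-scheduler_v1.34.1",
            "registry.k8s.io/kube-proxy_v1.34.1",
            "registry.k8s.io/pause_3.10.1",
            "registry.k8s.io/etcd_3.6.4-0",
            "registry.k8s.io/coredns/coredns_v1.12.1",
            "gcr.io/k8s-minikube/storage-provisioner_v5",
        }
        for _, a := range archives {
            p := filepath.Join(cacheDir, a)
            if _, err := os.Stat(p); err != nil {
                // Missing archives fail with the same "no such file or directory"
                // reported for kube-apiserver_v1.34.1 above.
                fmt.Println("missing:", p)
                continue
            }
            fmt.Println("present:", p)
        }
    }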
	I1101 11:26:22.255367 3018443 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1101 11:26:22.255469 3018443 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-847244 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-847244 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:26:22.255529 3018443 ssh_runner.go:195] Run: sudo crictl info
	I1101 11:26:22.303927 3018443 cni.go:84] Creating CNI manager for ""
	I1101 11:26:22.303951 3018443 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1101 11:26:22.303969 3018443 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:26:22.303991 3018443 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-847244 NodeName:kubernetes-upgrade-847244 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:26:22.304107 3018443 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-847244"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:26:22.304175 3018443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:26:22.315707 3018443 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:26:22.315770 3018443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:26:22.328228 3018443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1101 11:26:22.343386 3018443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:26:22.361755 3018443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2238 bytes)
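The kubeadm.yaml.new just copied (2238 bytes) is the multi-document stream rendered above: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file. For inspecting such a stream outside the test, a small sketch that splits the documents and prints each kind, using gopkg.in/yaml.v3 (a library choice of this note, not something the test itself uses):

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Usage: go run main.go /var/tmp/minikube/kubeadm.yaml.new
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: splitcfg <multi-document yaml>")
            os.Exit(1)
        }
        f, err := os.Open(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break
                }
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }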
	I1101 11:26:22.413587 3018443 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 11:26:22.417803 3018443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:26:22.589082 3018443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:26:22.603111 3018443 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244 for IP: 192.168.76.2
	I1101 11:26:22.603191 3018443 certs.go:195] generating shared ca certs ...
	I1101 11:26:22.603215 3018443 certs.go:227] acquiring lock for ca certs: {Name:mkb1fca73e716ecaa17fb23194b5757ed73c3505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:26:22.603375 3018443 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.key
	I1101 11:26:22.603420 3018443 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/proxy-client-ca.key
	I1101 11:26:22.603494 3018443 certs.go:257] generating profile certs ...
	I1101 11:26:22.603655 3018443 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/client.key
	I1101 11:26:22.603721 3018443 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/apiserver.key.992751a1
	I1101 11:26:22.603759 3018443 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/proxy-client.key
	I1101 11:26:22.603972 3018443 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/2849422.pem (1338 bytes)
	W1101 11:26:22.604022 3018443 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/2849422_empty.pem, impossibly tiny 0 bytes
	I1101 11:26:22.604031 3018443 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:26:22.604057 3018443 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca.pem (1082 bytes)
	I1101 11:26:22.604082 3018443 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:26:22.604103 3018443 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/key.pem (1675 bytes)
	I1101 11:26:22.604143 3018443 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/files/etc/ssl/certs/28494222.pem (1708 bytes)
	I1101 11:26:22.605148 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:26:22.653683 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:26:22.700143 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:26:22.733456 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 11:26:22.759193 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1101 11:26:22.783506 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 11:26:22.802573 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:26:22.822647 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 11:26:22.845425 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/2849422.pem --> /usr/share/ca-certificates/2849422.pem (1338 bytes)
	I1101 11:26:22.866291 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/files/etc/ssl/certs/28494222.pem --> /usr/share/ca-certificates/28494222.pem (1708 bytes)
	I1101 11:26:22.886194 3018443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:26:22.904817 3018443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:26:22.919063 3018443 ssh_runner.go:195] Run: openssl version
	I1101 11:26:22.925934 3018443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849422.pem && ln -fs /usr/share/ca-certificates/2849422.pem /etc/ssl/certs/2849422.pem"
	I1101 11:26:22.934877 3018443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849422.pem
	I1101 11:26:22.938852 3018443 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:50 /usr/share/ca-certificates/2849422.pem
	I1101 11:26:22.938967 3018443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849422.pem
	I1101 11:26:22.985483 3018443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2849422.pem /etc/ssl/certs/51391683.0"
	I1101 11:26:22.994780 3018443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28494222.pem && ln -fs /usr/share/ca-certificates/28494222.pem /etc/ssl/certs/28494222.pem"
	I1101 11:26:23.005678 3018443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28494222.pem
	I1101 11:26:23.010158 3018443 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:50 /usr/share/ca-certificates/28494222.pem
	I1101 11:26:23.010295 3018443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28494222.pem
	I1101 11:26:23.056996 3018443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28494222.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:26:23.067237 3018443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:26:23.076446 3018443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:26:23.080984 3018443 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:43 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:26:23.081112 3018443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:26:23.124953 3018443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:26:23.133999 3018443 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:26:23.138279 3018443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:26:23.185368 3018443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:26:23.227354 3018443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:26:23.277594 3018443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:26:23.325252 3018443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:26:23.370824 3018443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
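The six openssl runs above are 24-hour expiry checks (`-checkend 86400`) on the control-plane certificates. The same check can be done with Go's standard library; a minimal sketch, assuming one of the certificate paths from this log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM certificate found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate valid for at least another 24h")
    }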
	I1101 11:26:23.413610 3018443 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-847244 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-847244 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:26:23.413758 3018443 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1101 11:26:23.413846 3018443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:26:23.446019 3018443 cri.go:89] found id: "69ec9e3d28dec3556394d63d9d6fb3cf076ded8e0567647dff572890faf35ef6"
	I1101 11:26:23.446082 3018443 cri.go:89] found id: ""
	I1101 11:26:23.446159 3018443 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1101 11:26:23.477713 3018443 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"22bde59d8921d9502ca8ed26712b96607872b31079ced3630fd463e59c5f63c2","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/22bde59d8921d9502ca8ed26712b96607872b31079ced3630fd463e59c5f63c2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/22bde59d8921d9502ca8ed26712b96607872b31079ced3630fd463e59c5f63c2/rootfs","created":"2025-11-01T11:25:58.49708399Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"22bde59d8921d9502ca8ed26712b96607872b31079ced3630fd463e59c5f63c2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-847244_6d4336940423c55c91f8c91949e19e7c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-kubernetes-upgrade-847244","io.k
ubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6d4336940423c55c91f8c91949e19e7c"},"owner":"root"},{"ociVersion":"1.2.1","id":"3c3fa9534f80f64f5d9d7be10f9d316ca87876d8747dcf0bbd668d8cda5de1dc","pid":1551,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c3fa9534f80f64f5d9d7be10f9d316ca87876d8747dcf0bbd668d8cda5de1dc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c3fa9534f80f64f5d9d7be10f9d316ca87876d8747dcf0bbd668d8cda5de1dc/rootfs","created":"2025-11-01T11:26:02.143706773Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"6467914707a2e0c8026d1548e395e6b34edfc7d484526354c95d082373c3760a","io.kubernetes.cri.sandbox-name":"kube-apiserver-kubernetes-upgrade-847244","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"bfceec379067d2bd
6b2a14b09422f313"},"owner":"root"},{"ociVersion":"1.2.1","id":"4de1031a74db1b0f5a6369308de477a52ac5fd1f99f87367af744580629f84b5","pid":2216,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4de1031a74db1b0f5a6369308de477a52ac5fd1f99f87367af744580629f84b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4de1031a74db1b0f5a6369308de477a52ac5fd1f99f87367af744580629f84b5/rootfs","created":"2025-11-01T11:26:14.466833658Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"4de1031a74db1b0f5a6369308de477a52ac5fd1f99f87367af744580629f84b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-847244_a578269af95b9cb5f363557cea9f3e5d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-847244","io.k
ubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a578269af95b9cb5f363557cea9f3e5d"},"owner":"root"},{"ociVersion":"1.2.1","id":"5c48ec0531aa49f428ec5d6164f0e57fcac52d85a546fa68a9bbdb8120251638","pid":1269,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c48ec0531aa49f428ec5d6164f0e57fcac52d85a546fa68a9bbdb8120251638","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c48ec0531aa49f428ec5d6164f0e57fcac52d85a546fa68a9bbdb8120251638/rootfs","created":"2025-11-01T11:25:58.450669612Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5c48ec0531aa49f428ec5d6164f0e57fcac52d85a546fa68a9bbdb8120251638","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-847244_a578269af95b9cb5f363557cea9f3e5d","io.kubernetes.cri.sandb
ox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-847244","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a578269af95b9cb5f363557cea9f3e5d"},"owner":"root"},{"ociVersion":"1.2.1","id":"6467914707a2e0c8026d1548e395e6b34edfc7d484526354c95d082373c3760a","pid":1306,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6467914707a2e0c8026d1548e395e6b34edfc7d484526354c95d082373c3760a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6467914707a2e0c8026d1548e395e6b34edfc7d484526354c95d082373c3760a/rootfs","created":"2025-11-01T11:25:58.477295029Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"6467914707a2e0c8026d1548e395e6b34edfc7d484526354c95d082373c3760a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kub
e-apiserver-kubernetes-upgrade-847244_bfceec379067d2bd6b2a14b09422f313","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-kubernetes-upgrade-847244","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"bfceec379067d2bd6b2a14b09422f313"},"owner":"root"},{"ociVersion":"1.2.1","id":"97652e8149019395be64c79b41e32610f71766a2ee12b258616a55ed51a458c9","pid":2012,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/97652e8149019395be64c79b41e32610f71766a2ee12b258616a55ed51a458c9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/97652e8149019395be64c79b41e32610f71766a2ee12b258616a55ed51a458c9/rootfs","created":"2025-11-01T11:26:13.547442914Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"97652e8149019395be64c79b41e32610f71766a2ee12
b258616a55ed51a458c9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-7dl6j_0ea49f07-4e86-4c3b-adef-0bd6306c148a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-7dl6j","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0ea49f07-4e86-4c3b-adef-0bd6306c148a"},"owner":"root"},{"ociVersion":"1.2.1","id":"a34e35ddd9ca9258f0acabd8a03e13f04dfdd37cf8269bc670f7bc283c0b7f4b","pid":1334,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a34e35ddd9ca9258f0acabd8a03e13f04dfdd37cf8269bc670f7bc283c0b7f4b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a34e35ddd9ca9258f0acabd8a03e13f04dfdd37cf8269bc670f7bc283c0b7f4b/rootfs","created":"2025-11-01T11:25:58.496426138Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.s
andbox-id":"a34e35ddd9ca9258f0acabd8a03e13f04dfdd37cf8269bc670f7bc283c0b7f4b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-847244_bf0b96b9f841986039f090c08bb885fc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-kubernetes-upgrade-847244","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"bf0b96b9f841986039f090c08bb885fc"},"owner":"root"},{"ociVersion":"1.2.1","id":"a823c1a9bbdee094d610664d4d6f138aa16c36de8ec1abebcb0bd4751e1b0b0c","pid":1498,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a823c1a9bbdee094d610664d4d6f138aa16c36de8ec1abebcb0bd4751e1b0b0c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a823c1a9bbdee094d610664d4d6f138aa16c36de8ec1abebcb0bd4751e1b0b0c/rootfs","created":"2025-11-01T11:26:00.437479985Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type"
:"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"5c48ec0531aa49f428ec5d6164f0e57fcac52d85a546fa68a9bbdb8120251638","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-847244","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a578269af95b9cb5f363557cea9f3e5d"},"owner":"root"},{"ociVersion":"1.2.1","id":"b673032c02fff66e907a867b80e862ea1d996e9938dd37224fdaa4a8fb9dce5f","pid":2032,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b673032c02fff66e907a867b80e862ea1d996e9938dd37224fdaa4a8fb9dce5f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b673032c02fff66e907a867b80e862ea1d996e9938dd37224fdaa4a8fb9dce5f/rootfs","created":"2025-11-01T11:26:13.608748023Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102
","io.kubernetes.cri.sandbox-id":"b673032c02fff66e907a867b80e862ea1d996e9938dd37224fdaa4a8fb9dce5f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-w8887_77b865d4-f6b2-4b8a-af2a-6ab1d85c0803","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-w8887","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"77b865d4-f6b2-4b8a-af2a-6ab1d85c0803"},"owner":"root"},{"ociVersion":"1.2.1","id":"d522708e645872d6e70bd9d222ba40fe4d68daf5b8423357854423cb7c01b08a","pid":1597,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d522708e645872d6e70bd9d222ba40fe4d68daf5b8423357854423cb7c01b08a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d522708e645872d6e70bd9d222ba40fe4d68daf5b8423357854423cb7c01b08a/rootfs","created":"2025-11-01T11:26:03.747272424Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernete
s.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"a34e35ddd9ca9258f0acabd8a03e13f04dfdd37cf8269bc670f7bc283c0b7f4b","io.kubernetes.cri.sandbox-name":"kube-controller-manager-kubernetes-upgrade-847244","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"bf0b96b9f841986039f090c08bb885fc"},"owner":"root"},{"ociVersion":"1.2.1","id":"fa54f81df77ce16b2eabae2c3340ee4c71a2414af87f1bc798067bb36e13874b","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa54f81df77ce16b2eabae2c3340ee4c71a2414af87f1bc798067bb36e13874b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa54f81df77ce16b2eabae2c3340ee4c71a2414af87f1bc798067bb36e13874b/rootfs","created":"2025-11-01T11:25:58.722426621Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"22bde59d89
21d9502ca8ed26712b96607872b31079ced3630fd463e59c5f63c2","io.kubernetes.cri.sandbox-name":"etcd-kubernetes-upgrade-847244","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6d4336940423c55c91f8c91949e19e7c"},"owner":"root"}]
	I1101 11:26:23.478037 3018443 cri.go:126] list returned 11 containers
	I1101 11:26:23.478071 3018443 cri.go:129] container: {ID:22bde59d8921d9502ca8ed26712b96607872b31079ced3630fd463e59c5f63c2 Status:stopped}
	I1101 11:26:23.478112 3018443 cri.go:131] skipping 22bde59d8921d9502ca8ed26712b96607872b31079ced3630fd463e59c5f63c2 - not in ps
	I1101 11:26:23.478132 3018443 cri.go:129] container: {ID:3c3fa9534f80f64f5d9d7be10f9d316ca87876d8747dcf0bbd668d8cda5de1dc Status:running}
	I1101 11:26:23.478153 3018443 cri.go:131] skipping 3c3fa9534f80f64f5d9d7be10f9d316ca87876d8747dcf0bbd668d8cda5de1dc - not in ps
	I1101 11:26:23.478188 3018443 cri.go:129] container: {ID:4de1031a74db1b0f5a6369308de477a52ac5fd1f99f87367af744580629f84b5 Status:running}
	I1101 11:26:23.478208 3018443 cri.go:131] skipping 4de1031a74db1b0f5a6369308de477a52ac5fd1f99f87367af744580629f84b5 - not in ps
	I1101 11:26:23.478227 3018443 cri.go:129] container: {ID:5c48ec0531aa49f428ec5d6164f0e57fcac52d85a546fa68a9bbdb8120251638 Status:running}
	I1101 11:26:23.478247 3018443 cri.go:131] skipping 5c48ec0531aa49f428ec5d6164f0e57fcac52d85a546fa68a9bbdb8120251638 - not in ps
	I1101 11:26:23.478282 3018443 cri.go:129] container: {ID:6467914707a2e0c8026d1548e395e6b34edfc7d484526354c95d082373c3760a Status:running}
	I1101 11:26:23.478301 3018443 cri.go:131] skipping 6467914707a2e0c8026d1548e395e6b34edfc7d484526354c95d082373c3760a - not in ps
	I1101 11:26:23.478321 3018443 cri.go:129] container: {ID:97652e8149019395be64c79b41e32610f71766a2ee12b258616a55ed51a458c9 Status:running}
	I1101 11:26:23.478339 3018443 cri.go:131] skipping 97652e8149019395be64c79b41e32610f71766a2ee12b258616a55ed51a458c9 - not in ps
	I1101 11:26:23.478373 3018443 cri.go:129] container: {ID:a34e35ddd9ca9258f0acabd8a03e13f04dfdd37cf8269bc670f7bc283c0b7f4b Status:running}
	I1101 11:26:23.478393 3018443 cri.go:131] skipping a34e35ddd9ca9258f0acabd8a03e13f04dfdd37cf8269bc670f7bc283c0b7f4b - not in ps
	I1101 11:26:23.478410 3018443 cri.go:129] container: {ID:a823c1a9bbdee094d610664d4d6f138aa16c36de8ec1abebcb0bd4751e1b0b0c Status:running}
	I1101 11:26:23.478430 3018443 cri.go:131] skipping a823c1a9bbdee094d610664d4d6f138aa16c36de8ec1abebcb0bd4751e1b0b0c - not in ps
	I1101 11:26:23.478458 3018443 cri.go:129] container: {ID:b673032c02fff66e907a867b80e862ea1d996e9938dd37224fdaa4a8fb9dce5f Status:created}
	I1101 11:26:23.478484 3018443 cri.go:131] skipping b673032c02fff66e907a867b80e862ea1d996e9938dd37224fdaa4a8fb9dce5f - not in ps
	I1101 11:26:23.478504 3018443 cri.go:129] container: {ID:d522708e645872d6e70bd9d222ba40fe4d68daf5b8423357854423cb7c01b08a Status:running}
	I1101 11:26:23.478532 3018443 cri.go:131] skipping d522708e645872d6e70bd9d222ba40fe4d68daf5b8423357854423cb7c01b08a - not in ps
	I1101 11:26:23.478553 3018443 cri.go:129] container: {ID:fa54f81df77ce16b2eabae2c3340ee4c71a2414af87f1bc798067bb36e13874b Status:stopped}
	I1101 11:26:23.478573 3018443 cri.go:131] skipping fa54f81df77ce16b2eabae2c3340ee4c71a2414af87f1bc798067bb36e13874b - not in ps
	I1101 11:26:23.478649 3018443 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:26:23.487511 3018443 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:26:23.487597 3018443 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:26:23.487745 3018443 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:26:23.496238 3018443 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:26:23.496894 3018443 kubeconfig.go:125] found "kubernetes-upgrade-847244" server: "https://192.168.76.2:8443"
	I1101 11:26:23.497591 3018443 kapi.go:59] client config for kubernetes-upgrade-847244: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/client.key", CAFile:"/home/jenkins/minikube-integration/21830-2847530/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8
(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:26:23.498283 3018443 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 11:26:23.498367 3018443 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 11:26:23.498391 3018443 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 11:26:23.498421 3018443 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 11:26:23.498449 3018443 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 11:26:23.498799 3018443 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:26:23.507687 3018443 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 11:26:23.507770 3018443 kubeadm.go:602] duration metric: took 20.145234ms to restartPrimaryControlPlane
	I1101 11:26:23.507796 3018443 kubeadm.go:403] duration metric: took 94.197226ms to StartCluster
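The restart path above decides against reconfiguring the control plane because `diff -u` between the existing kubeadm.yaml and the freshly rendered kubeadm.yaml.new produced no differences. A small sketch of the same comparison, relying on diff's convention of exiting with status 1 when the inputs differ (paths taken from this log):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.CombinedOutput()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("configs identical: no reconfiguration required")
        case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
            // diff exits with status 1 when the files differ.
            fmt.Println("configs differ, control plane would be reconfigured:")
            fmt.Print(string(out))
        default:
            fmt.Println("diff failed:", err)
        }
    }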
	I1101 11:26:23.507825 3018443 settings.go:142] acquiring lock: {Name:mk5646e8bf39bd11e3ceea772a0783343ff08308 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:26:23.508000 3018443 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-2847530/kubeconfig
	I1101 11:26:23.508730 3018443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-2847530/kubeconfig: {Name:mk30d6c204d7a4b60522139b4b98bc7edaea9653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:26:23.508998 3018443 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1101 11:26:23.509481 3018443 config.go:182] Loaded profile config "kubernetes-upgrade-847244": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 11:26:23.509478 3018443 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:26:23.509561 3018443 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-847244"
	I1101 11:26:23.509581 3018443 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-847244"
	W1101 11:26:23.509590 3018443 addons.go:248] addon storage-provisioner should already be in state true
	I1101 11:26:23.509617 3018443 host.go:66] Checking if "kubernetes-upgrade-847244" exists ...
	I1101 11:26:23.509731 3018443 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-847244"
	I1101 11:26:23.509760 3018443 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-847244"
	I1101 11:26:23.510057 3018443 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-847244 --format={{.State.Status}}
	I1101 11:26:23.511912 3018443 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-847244 --format={{.State.Status}}
	I1101 11:26:23.518331 3018443 out.go:179] * Verifying Kubernetes components...
	I1101 11:26:23.522415 3018443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:26:23.550591 3018443 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:26:23.553970 3018443 kapi.go:59] client config for kubernetes-upgrade-847244: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/client.key", CAFile:"/home/jenkins/minikube-integration/21830-2847530/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8
(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:26:23.554303 3018443 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-847244"
	W1101 11:26:23.554321 3018443 addons.go:248] addon default-storageclass should already be in state true
	I1101 11:26:23.554348 3018443 host.go:66] Checking if "kubernetes-upgrade-847244" exists ...
	I1101 11:26:23.554856 3018443 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-847244 --format={{.State.Status}}
	I1101 11:26:23.556204 3018443 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:26:23.556224 3018443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:26:23.556275 3018443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-847244
	I1101 11:26:23.592324 3018443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37031 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/kubernetes-upgrade-847244/id_rsa Username:docker}
	I1101 11:26:23.598370 3018443 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:26:23.598409 3018443 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:26:23.598490 3018443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-847244
	I1101 11:26:23.632087 3018443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37031 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/kubernetes-upgrade-847244/id_rsa Username:docker}
	I1101 11:26:23.902036 3018443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:26:23.938892 3018443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:26:23.989425 3018443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:32:24.634664 3018443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6m0.732520186s)
	W1101 11:32:24.634706 3018443 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	W1101 11:32:24.634818 3018443 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	]
	I1101 11:32:24.635074 3018443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6m0.696096314s)
	W1101 11:32:24.635109 3018443 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	W1101 11:32:24.635169 3018443 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	]
	I1101 11:32:24.635423 3018443 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6m0.645864249s)
	I1101 11:32:24.635467 3018443 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:32:24.638459 3018443 out.go:203] 
	I1101 11:32:24.638459 3018443 out.go:179] * Enabled addons: 
	I1101 11:32:24.641362 3018443 addons.go:515] duration metric: took 6m1.131881281s for enable addons: enabled=[]
	W1101 11:32:24.641426 3018443 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1101 11:32:24.641454 3018443 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1101 11:32:24.641463 3018443 out.go:285] * Related issues:
	* Related issues:
	W1101 11:32:24.641476 3018443 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1101 11:32:24.641498 3018443 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1101 11:32:24.646470 3018443 out.go:203] 

                                                
                                                
** /stderr **
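The exit above (K8S_APISERVER_MISSING: apiserver process never appeared) can be double-checked by hand while the kic node container from this run is still up. A minimal sketch, reusing the profile name and kubectl paths already shown in the log; these commands are illustrative and not part of the test harness:

	# does a kube-apiserver process exist inside the node container?
	docker exec kubernetes-upgrade-847244 sudo pgrep -af kube-apiserver
	# if it does, ask it directly for readiness, using the same KUBECONFIG and kubectl binary the addon apply used
	docker exec kubernetes-upgrade-847244 sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl get --raw=/readyz

The first command shows whether the process check that timed out would ever have matched; the second shows what, if anything, is answering on the apiserver endpoint that the storage-provisioner and storageclass applies above were waiting on.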
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-arm64 start -p kubernetes-upgrade-847244 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 105
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-11-01 11:32:24.684409755 +0000 UTC m=+2980.407204749
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-847244
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-847244:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b0087aee007e395a46013b1720b1043f0a3ecdaae5e7d6e777a0e5d1d8577946",
	        "Created": "2025-11-01T11:25:07.511803836Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3016325,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T11:25:39.281131691Z",
	            "FinishedAt": "2025-11-01T11:25:38.380282697Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/b0087aee007e395a46013b1720b1043f0a3ecdaae5e7d6e777a0e5d1d8577946/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0087aee007e395a46013b1720b1043f0a3ecdaae5e7d6e777a0e5d1d8577946/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0087aee007e395a46013b1720b1043f0a3ecdaae5e7d6e777a0e5d1d8577946/hosts",
	        "LogPath": "/var/lib/docker/containers/b0087aee007e395a46013b1720b1043f0a3ecdaae5e7d6e777a0e5d1d8577946/b0087aee007e395a46013b1720b1043f0a3ecdaae5e7d6e777a0e5d1d8577946-json.log",
	        "Name": "/kubernetes-upgrade-847244",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-847244:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-847244",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b0087aee007e395a46013b1720b1043f0a3ecdaae5e7d6e777a0e5d1d8577946",
	                "LowerDir": "/var/lib/docker/overlay2/be5b8d02394a9e2b69e524291cea3e8886e1f6f1e55cb6e122906487938cd57e-init/diff:/var/lib/docker/overlay2/6ccbdc4e59211c61d83d46bc353aa66c1a8dd6bb2f77e16ffc85d068d750bbe6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/be5b8d02394a9e2b69e524291cea3e8886e1f6f1e55cb6e122906487938cd57e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/be5b8d02394a9e2b69e524291cea3e8886e1f6f1e55cb6e122906487938cd57e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/be5b8d02394a9e2b69e524291cea3e8886e1f6f1e55cb6e122906487938cd57e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-847244",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-847244/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-847244",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-847244",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-847244",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9be4a5aa3db64d7e54b064ad21a66114a4d41b345abf0c9b44057eed3bd4abed",
	            "SandboxKey": "/var/run/docker/netns/9be4a5aa3db6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37031"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37032"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37035"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37033"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37034"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-847244": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:ad:5b:00:68:7c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be8624048c1520def18595845f7b8687e3344ccc62c189fd8528cc4275398dd4",
	                    "EndpointID": "88c85930263f0d3aa5c9e301eb8089202671608abcc4e0b93eaf86c33267cabb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-847244",
	                        "b0087aee007e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
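The inspect output confirms the container was created with the resources the failed start asked for: HostConfig.Memory of 3221225472 bytes is exactly 3072 MiB (3072 * 1024 * 1024) from --memory=3072, and NanoCpus of 2000000000 is 2 CPUs, which suggests the exit status 105 was not caused by the container getting fewer resources than requested. A quick way to pull just those fields, shown only as an illustration:

	docker inspect kubernetes-upgrade-847244 --format '{{.State.Status}} {{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}'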
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-847244 -n kubernetes-upgrade-847244
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-847244 -n kubernetes-upgrade-847244: exit status 2 (15.903170285s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
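The split result here is expected: --format={{.Host}} only reports the host (container) state, which is Running, while the non-zero exit indicates minikube judged some other component not OK. A broader one-liner for the same check, assuming the profile still exists (illustrative only; field names follow minikube's default status template):

	out/minikube-linux-arm64 status -p kubernetes-upgrade-847244 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'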
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-847244 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-847244 logs -n 25: (1m1.041100295s)
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-921290 sudo cat /etc/kubernetes/kubelet.conf                                                           │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo cat /var/lib/kubelet/config.yaml                                                           │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo systemctl status docker --all --full --no-pager                                            │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo systemctl cat docker --no-pager                                                            │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo cat /etc/docker/daemon.json                                                                │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo docker system info                                                                         │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo systemctl status cri-docker --all --full --no-pager                                        │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo systemctl cat cri-docker --no-pager                                                        │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo cri-dockerd --version                                                                      │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo systemctl status containerd --all --full --no-pager                                        │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo systemctl cat containerd --no-pager                                                        │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo cat /lib/systemd/system/containerd.service                                                 │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo cat /etc/containerd/config.toml                                                            │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo containerd config dump                                                                     │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo systemctl status crio --all --full --no-pager                                              │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo systemctl cat crio --no-pager                                                              │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ ssh     │ -p cilium-921290 sudo crio config                                                                                │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │                     │
	│ delete  │ -p cilium-921290                                                                                                 │ cilium-921290            │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │ 01 Nov 25 11:31 UTC │
	│ start   │ -p force-systemd-env-686320 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-env-686320 │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │ 01 Nov 25 11:31 UTC │
	│ ssh     │ force-systemd-env-686320 ssh cat /etc/containerd/config.toml                                                     │ force-systemd-env-686320 │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │ 01 Nov 25 11:31 UTC │
	│ delete  │ -p force-systemd-env-686320                                                                                      │ force-systemd-env-686320 │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │ 01 Nov 25 11:31 UTC │
	│ start   │ -p cert-expiration-409334 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd     │ cert-expiration-409334   │ jenkins │ v1.37.0 │ 01 Nov 25 11:31 UTC │ 01 Nov 25 11:32 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 11:31:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 11:31:53.918821 3041369 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:31:53.918914 3041369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:31:53.918918 3041369 out.go:374] Setting ErrFile to fd 2...
	I1101 11:31:53.918921 3041369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:31:53.919177 3041369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
	I1101 11:31:53.919639 3041369 out.go:368] Setting JSON to false
	I1101 11:31:53.920758 3041369 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":72860,"bootTime":1761923854,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 11:31:53.920816 3041369 start.go:143] virtualization:  
	I1101 11:31:53.924385 3041369 out.go:179] * [cert-expiration-409334] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 11:31:53.928927 3041369 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:31:53.928995 3041369 notify.go:221] Checking for updates...
	I1101 11:31:53.935481 3041369 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:31:53.938912 3041369 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	I1101 11:31:53.942202 3041369 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	I1101 11:31:53.945471 3041369 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 11:31:53.948618 3041369 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:31:53.952174 3041369 config.go:182] Loaded profile config "kubernetes-upgrade-847244": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 11:31:53.952301 3041369 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:31:53.982002 3041369 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 11:31:53.982116 3041369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:31:54.048364 3041369 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 11:31:54.03854965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:31:54.048461 3041369 docker.go:319] overlay module found
	I1101 11:31:54.051733 3041369 out.go:179] * Using the docker driver based on user configuration
	I1101 11:31:54.054707 3041369 start.go:309] selected driver: docker
	I1101 11:31:54.054717 3041369 start.go:930] validating driver "docker" against <nil>
	I1101 11:31:54.054729 3041369 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:31:54.055482 3041369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:31:54.113137 3041369 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 11:31:54.103611035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:31:54.113285 3041369 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 11:31:54.113503 3041369 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 11:31:54.116510 3041369 out.go:179] * Using Docker driver with root privileges
	I1101 11:31:54.119295 3041369 cni.go:84] Creating CNI manager for ""
	I1101 11:31:54.119354 3041369 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1101 11:31:54.119361 3041369 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 11:31:54.119449 3041369 start.go:353] cluster config:
	{Name:cert-expiration-409334 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-409334 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:31:54.122609 3041369 out.go:179] * Starting "cert-expiration-409334" primary control-plane node in "cert-expiration-409334" cluster
	I1101 11:31:54.125510 3041369 cache.go:124] Beginning downloading kic base image for docker with containerd
	I1101 11:31:54.128425 3041369 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 11:31:54.131210 3041369 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1101 11:31:54.131254 3041369 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1101 11:31:54.131260 3041369 cache.go:59] Caching tarball of preloaded images
	I1101 11:31:54.131300 3041369 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 11:31:54.131350 3041369 preload.go:233] Found /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1101 11:31:54.131358 3041369 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1101 11:31:54.131463 3041369 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/config.json ...
	I1101 11:31:54.131478 3041369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/config.json: {Name:mk3954a2c258cf3021f25834e9aa103e4186f56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:31:54.150013 3041369 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 11:31:54.150023 3041369 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 11:31:54.150040 3041369 cache.go:233] Successfully downloaded all kic artifacts
	I1101 11:31:54.150061 3041369 start.go:360] acquireMachinesLock for cert-expiration-409334: {Name:mk3648670f09d67740e8c0097fb6e825276a1023 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:31:54.150160 3041369 start.go:364] duration metric: took 86.102µs to acquireMachinesLock for "cert-expiration-409334"
	I1101 11:31:54.150183 3041369 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-409334 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-409334 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1101 11:31:54.150243 3041369 start.go:125] createHost starting for "" (driver="docker")
	I1101 11:31:54.153591 3041369 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 11:31:54.153789 3041369 start.go:159] libmachine.API.Create for "cert-expiration-409334" (driver="docker")
	I1101 11:31:54.153822 3041369 client.go:173] LocalClient.Create starting
	I1101 11:31:54.153894 3041369 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca.pem
	I1101 11:31:54.153926 3041369 main.go:143] libmachine: Decoding PEM data...
	I1101 11:31:54.153938 3041369 main.go:143] libmachine: Parsing certificate...
	I1101 11:31:54.153990 3041369 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/cert.pem
	I1101 11:31:54.154005 3041369 main.go:143] libmachine: Decoding PEM data...
	I1101 11:31:54.154017 3041369 main.go:143] libmachine: Parsing certificate...
	I1101 11:31:54.154367 3041369 cli_runner.go:164] Run: docker network inspect cert-expiration-409334 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 11:31:54.169788 3041369 cli_runner.go:211] docker network inspect cert-expiration-409334 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 11:31:54.169867 3041369 network_create.go:284] running [docker network inspect cert-expiration-409334] to gather additional debugging logs...
	I1101 11:31:54.169881 3041369 cli_runner.go:164] Run: docker network inspect cert-expiration-409334
	W1101 11:31:54.185007 3041369 cli_runner.go:211] docker network inspect cert-expiration-409334 returned with exit code 1
	I1101 11:31:54.185027 3041369 network_create.go:287] error running [docker network inspect cert-expiration-409334]: docker network inspect cert-expiration-409334: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-409334 not found
	I1101 11:31:54.185038 3041369 network_create.go:289] output of [docker network inspect cert-expiration-409334]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-409334 not found
	
	** /stderr **
	I1101 11:31:54.185147 3041369 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:31:54.199620 3041369 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1006bc31d72c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:c0:fc:76:40:11} reservation:<nil>}
	I1101 11:31:54.200059 3041369 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-30375b488c94 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:42:fc:67:63:17} reservation:<nil>}
	I1101 11:31:54.200434 3041369 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d38d25cc586e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:52:86:0f:3a:f9:c2} reservation:<nil>}
	I1101 11:31:54.200638 3041369 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-be8624048c15 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:2e:e8:21:ca:73} reservation:<nil>}
	I1101 11:31:54.201053 3041369 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a116c0}
	I1101 11:31:54.201066 3041369 network_create.go:124] attempt to create docker network cert-expiration-409334 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 11:31:54.201118 3041369 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-409334 cert-expiration-409334
	I1101 11:31:54.259496 3041369 network_create.go:108] docker network cert-expiration-409334 192.168.85.0/24 created
	I1101 11:31:54.259517 3041369 kic.go:121] calculated static IP "192.168.85.2" for the "cert-expiration-409334" container
	I1101 11:31:54.259585 3041369 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 11:31:54.276401 3041369 cli_runner.go:164] Run: docker volume create cert-expiration-409334 --label name.minikube.sigs.k8s.io=cert-expiration-409334 --label created_by.minikube.sigs.k8s.io=true
	I1101 11:31:54.293845 3041369 oci.go:103] Successfully created a docker volume cert-expiration-409334
	I1101 11:31:54.293924 3041369 cli_runner.go:164] Run: docker run --rm --name cert-expiration-409334-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-409334 --entrypoint /usr/bin/test -v cert-expiration-409334:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 11:31:54.797756 3041369 oci.go:107] Successfully prepared a docker volume cert-expiration-409334
	I1101 11:31:54.797808 3041369 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1101 11:31:54.797827 3041369 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 11:31:54.797900 3041369 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-409334:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 11:31:59.026378 3041369 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-409334:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.228440062s)
	I1101 11:31:59.026403 3041369 kic.go:203] duration metric: took 4.228569305s to extract preloaded images to volume ...
	W1101 11:31:59.026576 3041369 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 11:31:59.026704 3041369 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 11:31:59.094366 3041369 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-409334 --name cert-expiration-409334 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-409334 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-409334 --network cert-expiration-409334 --ip 192.168.85.2 --volume cert-expiration-409334:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 11:31:59.392670 3041369 cli_runner.go:164] Run: docker container inspect cert-expiration-409334 --format={{.State.Running}}
	I1101 11:31:59.414538 3041369 cli_runner.go:164] Run: docker container inspect cert-expiration-409334 --format={{.State.Status}}
	I1101 11:31:59.436891 3041369 cli_runner.go:164] Run: docker exec cert-expiration-409334 stat /var/lib/dpkg/alternatives/iptables
	I1101 11:31:59.489111 3041369 oci.go:144] the created container "cert-expiration-409334" has a running status.
	I1101 11:31:59.489130 3041369 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-2847530/.minikube/machines/cert-expiration-409334/id_rsa...
	I1101 11:32:00.386015 3041369 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-2847530/.minikube/machines/cert-expiration-409334/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 11:32:00.414642 3041369 cli_runner.go:164] Run: docker container inspect cert-expiration-409334 --format={{.State.Status}}
	I1101 11:32:00.443814 3041369 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 11:32:00.443828 3041369 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-409334 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 11:32:00.525753 3041369 cli_runner.go:164] Run: docker container inspect cert-expiration-409334 --format={{.State.Status}}
	I1101 11:32:00.548739 3041369 machine.go:94] provisionDockerMachine start ...
	I1101 11:32:00.548842 3041369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-409334
	I1101 11:32:00.567898 3041369 main.go:143] libmachine: Using SSH client type: native
	I1101 11:32:00.568238 3041369 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37066 <nil> <nil>}
	I1101 11:32:00.568245 3041369 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:32:00.731320 3041369 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-409334
	
	I1101 11:32:00.731334 3041369 ubuntu.go:182] provisioning hostname "cert-expiration-409334"
	I1101 11:32:00.731398 3041369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-409334
	I1101 11:32:00.752035 3041369 main.go:143] libmachine: Using SSH client type: native
	I1101 11:32:00.752336 3041369 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37066 <nil> <nil>}
	I1101 11:32:00.752345 3041369 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-409334 && echo "cert-expiration-409334" | sudo tee /etc/hostname
	I1101 11:32:00.926382 3041369 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-409334
	
	I1101 11:32:00.926461 3041369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-409334
	I1101 11:32:00.944788 3041369 main.go:143] libmachine: Using SSH client type: native
	I1101 11:32:00.945080 3041369 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37066 <nil> <nil>}
	I1101 11:32:00.945096 3041369 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-409334' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-409334/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-409334' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:32:01.100403 3041369 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:32:01.100423 3041369 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-2847530/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-2847530/.minikube}
	I1101 11:32:01.100440 3041369 ubuntu.go:190] setting up certificates
	I1101 11:32:01.100449 3041369 provision.go:84] configureAuth start
	I1101 11:32:01.100514 3041369 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-409334
	I1101 11:32:01.119966 3041369 provision.go:143] copyHostCerts
	I1101 11:32:01.120021 3041369 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.pem, removing ...
	I1101 11:32:01.120028 3041369 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.pem
	I1101 11:32:01.120111 3041369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.pem (1082 bytes)
	I1101 11:32:01.120211 3041369 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-2847530/.minikube/cert.pem, removing ...
	I1101 11:32:01.120215 3041369 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-2847530/.minikube/cert.pem
	I1101 11:32:01.120239 3041369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-2847530/.minikube/cert.pem (1123 bytes)
	I1101 11:32:01.120310 3041369 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-2847530/.minikube/key.pem, removing ...
	I1101 11:32:01.120314 3041369 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-2847530/.minikube/key.pem
	I1101 11:32:01.120338 3041369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-2847530/.minikube/key.pem (1675 bytes)
	I1101 11:32:01.120395 3041369 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-409334 san=[127.0.0.1 192.168.85.2 cert-expiration-409334 localhost minikube]
	I1101 11:32:01.538193 3041369 provision.go:177] copyRemoteCerts
	I1101 11:32:01.538252 3041369 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:32:01.538295 3041369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-409334
	I1101 11:32:01.562710 3041369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37066 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/cert-expiration-409334/id_rsa Username:docker}
	I1101 11:32:01.672221 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 11:32:01.693280 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 11:32:01.712437 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:32:01.734761 3041369 provision.go:87] duration metric: took 634.299242ms to configureAuth
	I1101 11:32:01.734779 3041369 ubuntu.go:206] setting minikube options for container-runtime
	I1101 11:32:01.734985 3041369 config.go:182] Loaded profile config "cert-expiration-409334": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 11:32:01.734991 3041369 machine.go:97] duration metric: took 1.186241973s to provisionDockerMachine
	I1101 11:32:01.734996 3041369 client.go:176] duration metric: took 7.581170453s to LocalClient.Create
	I1101 11:32:01.735017 3041369 start.go:167] duration metric: took 7.581228051s to libmachine.API.Create "cert-expiration-409334"
	I1101 11:32:01.735023 3041369 start.go:293] postStartSetup for "cert-expiration-409334" (driver="docker")
	I1101 11:32:01.735031 3041369 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:32:01.735081 3041369 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:32:01.735120 3041369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-409334
	I1101 11:32:01.753339 3041369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37066 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/cert-expiration-409334/id_rsa Username:docker}
	I1101 11:32:01.860450 3041369 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:32:01.863779 3041369 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 11:32:01.863798 3041369 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 11:32:01.863808 3041369 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-2847530/.minikube/addons for local assets ...
	I1101 11:32:01.863899 3041369 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-2847530/.minikube/files for local assets ...
	I1101 11:32:01.863987 3041369 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-2847530/.minikube/files/etc/ssl/certs/28494222.pem -> 28494222.pem in /etc/ssl/certs
	I1101 11:32:01.864088 3041369 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:32:01.871784 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/files/etc/ssl/certs/28494222.pem --> /etc/ssl/certs/28494222.pem (1708 bytes)
	I1101 11:32:01.891944 3041369 start.go:296] duration metric: took 156.908717ms for postStartSetup
	I1101 11:32:01.892347 3041369 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-409334
	I1101 11:32:01.911585 3041369 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/config.json ...
	I1101 11:32:01.911909 3041369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:32:01.911952 3041369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-409334
	I1101 11:32:01.929842 3041369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37066 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/cert-expiration-409334/id_rsa Username:docker}
	I1101 11:32:02.033315 3041369 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 11:32:02.038390 3041369 start.go:128] duration metric: took 7.888132674s to createHost
	I1101 11:32:02.038406 3041369 start.go:83] releasing machines lock for "cert-expiration-409334", held for 7.888239076s
	I1101 11:32:02.038480 3041369 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-409334
	I1101 11:32:02.056939 3041369 ssh_runner.go:195] Run: cat /version.json
	I1101 11:32:02.056985 3041369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-409334
	I1101 11:32:02.057020 3041369 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:32:02.057074 3041369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-409334
	I1101 11:32:02.077753 3041369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37066 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/cert-expiration-409334/id_rsa Username:docker}
	I1101 11:32:02.093677 3041369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37066 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/cert-expiration-409334/id_rsa Username:docker}
	I1101 11:32:02.274415 3041369 ssh_runner.go:195] Run: systemctl --version
	I1101 11:32:02.281888 3041369 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:32:02.286518 3041369 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:32:02.286602 3041369 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:32:02.317849 3041369 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1101 11:32:02.317862 3041369 start.go:496] detecting cgroup driver to use...
	I1101 11:32:02.317892 3041369 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1101 11:32:02.317943 3041369 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 11:32:02.333845 3041369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 11:32:02.347218 3041369 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:32:02.347271 3041369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:32:02.366979 3041369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:32:02.386789 3041369 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:32:02.513023 3041369 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:32:02.652014 3041369 docker.go:234] disabling docker service ...
	I1101 11:32:02.652095 3041369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:32:02.675164 3041369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:32:02.689032 3041369 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:32:02.818711 3041369 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:32:02.945005 3041369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:32:02.958659 3041369 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:32:02.973708 3041369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1101 11:32:02.982765 3041369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1101 11:32:02.991628 3041369 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1101 11:32:02.991687 3041369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1101 11:32:03.002749 3041369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 11:32:03.013169 3041369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1101 11:32:03.022557 3041369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1101 11:32:03.031633 3041369 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:32:03.040731 3041369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1101 11:32:03.049375 3041369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1101 11:32:03.057816 3041369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1101 11:32:03.066319 3041369 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:32:03.074219 3041369 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:32:03.081538 3041369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:32:03.202822 3041369 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1101 11:32:03.347844 3041369 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1101 11:32:03.347930 3041369 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1101 11:32:03.351695 3041369 start.go:564] Will wait 60s for crictl version
	I1101 11:32:03.351764 3041369 ssh_runner.go:195] Run: which crictl
	I1101 11:32:03.355085 3041369 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 11:32:03.384437 3041369 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1101 11:32:03.384494 3041369 ssh_runner.go:195] Run: containerd --version
	I1101 11:32:03.408302 3041369 ssh_runner.go:195] Run: containerd --version
	I1101 11:32:03.435492 3041369 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1101 11:32:03.438545 3041369 cli_runner.go:164] Run: docker network inspect cert-expiration-409334 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 11:32:03.454967 3041369 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 11:32:03.458944 3041369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:32:03.469993 3041369 kubeadm.go:884] updating cluster {Name:cert-expiration-409334 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-409334 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:32:03.470095 3041369 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1101 11:32:03.470150 3041369 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:32:03.496155 3041369 containerd.go:627] all images are preloaded for containerd runtime.
	I1101 11:32:03.496166 3041369 containerd.go:534] Images already preloaded, skipping extraction
	I1101 11:32:03.496225 3041369 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:32:03.524466 3041369 containerd.go:627] all images are preloaded for containerd runtime.
	I1101 11:32:03.524478 3041369 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:32:03.524488 3041369 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1101 11:32:03.524586 3041369 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-409334 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-409334 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:32:03.524649 3041369 ssh_runner.go:195] Run: sudo crictl info
	I1101 11:32:03.554636 3041369 cni.go:84] Creating CNI manager for ""
	I1101 11:32:03.554648 3041369 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1101 11:32:03.554662 3041369 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:32:03.554688 3041369 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-409334 NodeName:cert-expiration-409334 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:32:03.554798 3041369 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "cert-expiration-409334"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:32:03.554864 3041369 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:32:03.562676 3041369 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:32:03.562775 3041369 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:32:03.570526 3041369 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1101 11:32:03.583262 3041369 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:32:03.596372 3041369 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1101 11:32:03.611039 3041369 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 11:32:03.614686 3041369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:32:03.624713 3041369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:32:03.748735 3041369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:32:03.764865 3041369 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334 for IP: 192.168.85.2
	I1101 11:32:03.764877 3041369 certs.go:195] generating shared ca certs ...
	I1101 11:32:03.764892 3041369 certs.go:227] acquiring lock for ca certs: {Name:mkb1fca73e716ecaa17fb23194b5757ed73c3505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:32:03.765052 3041369 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.key
	I1101 11:32:03.765101 3041369 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/proxy-client-ca.key
	I1101 11:32:03.765107 3041369 certs.go:257] generating profile certs ...
	I1101 11:32:03.765168 3041369 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/client.key
	I1101 11:32:03.765179 3041369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/client.crt with IP's: []
	I1101 11:32:04.136747 3041369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/client.crt ...
	I1101 11:32:04.136763 3041369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/client.crt: {Name:mk7c7addd19cb2f93abca9b4c5603a500f5188ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:32:04.136965 3041369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/client.key ...
	I1101 11:32:04.136974 3041369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/client.key: {Name:mk094e4e820612a9c877abd2dc2a12d43998cf97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:32:04.137064 3041369 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/apiserver.key.520fb7a3
	I1101 11:32:04.137081 3041369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/apiserver.crt.520fb7a3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1101 11:32:05.470964 3041369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/apiserver.crt.520fb7a3 ...
	I1101 11:32:05.470979 3041369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/apiserver.crt.520fb7a3: {Name:mk78ef20bd13cb8fa4b32dca1f99c630ba07b771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:32:05.471169 3041369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/apiserver.key.520fb7a3 ...
	I1101 11:32:05.471176 3041369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/apiserver.key.520fb7a3: {Name:mk45af7ae914b0fbed1a5ad03ee85d2cbd57ec46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:32:05.471254 3041369 certs.go:382] copying /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/apiserver.crt.520fb7a3 -> /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/apiserver.crt
	I1101 11:32:05.471326 3041369 certs.go:386] copying /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/apiserver.key.520fb7a3 -> /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/apiserver.key
	I1101 11:32:05.471377 3041369 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/proxy-client.key
	I1101 11:32:05.471389 3041369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/proxy-client.crt with IP's: []
	I1101 11:32:06.630525 3041369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/proxy-client.crt ...
	I1101 11:32:06.630541 3041369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/proxy-client.crt: {Name:mkc8e297b1d343ad6e92c3ff02cc24489cc781bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:32:06.630726 3041369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/proxy-client.key ...
	I1101 11:32:06.630734 3041369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/proxy-client.key: {Name:mk53ac3db2067d920c9f42cfb7c5dd0f13b6ae17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:32:06.630916 3041369 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/2849422.pem (1338 bytes)
	W1101 11:32:06.630965 3041369 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/2849422_empty.pem, impossibly tiny 0 bytes
	I1101 11:32:06.630980 3041369 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 11:32:06.631006 3041369 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/ca.pem (1082 bytes)
	I1101 11:32:06.631033 3041369 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:32:06.631053 3041369 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/key.pem (1675 bytes)
	I1101 11:32:06.631092 3041369 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-2847530/.minikube/files/etc/ssl/certs/28494222.pem (1708 bytes)
	I1101 11:32:06.631769 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:32:06.650532 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:32:06.670131 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:32:06.690600 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 11:32:06.708749 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 11:32:06.726764 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 11:32:06.744446 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:32:06.767270 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/cert-expiration-409334/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 11:32:06.787727 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/files/etc/ssl/certs/28494222.pem --> /usr/share/ca-certificates/28494222.pem (1708 bytes)
	I1101 11:32:06.808212 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:32:06.827454 3041369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-2847530/.minikube/certs/2849422.pem --> /usr/share/ca-certificates/2849422.pem (1338 bytes)
	I1101 11:32:06.844517 3041369 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:32:06.857343 3041369 ssh_runner.go:195] Run: openssl version
	I1101 11:32:06.863485 3041369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28494222.pem && ln -fs /usr/share/ca-certificates/28494222.pem /etc/ssl/certs/28494222.pem"
	I1101 11:32:06.872117 3041369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28494222.pem
	I1101 11:32:06.875939 3041369 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:50 /usr/share/ca-certificates/28494222.pem
	I1101 11:32:06.876013 3041369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28494222.pem
	I1101 11:32:06.916791 3041369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28494222.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:32:06.925125 3041369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:32:06.933440 3041369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:32:06.937190 3041369 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 10:43 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:32:06.937262 3041369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:32:06.983063 3041369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:32:06.992214 3041369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849422.pem && ln -fs /usr/share/ca-certificates/2849422.pem /etc/ssl/certs/2849422.pem"
	I1101 11:32:07.001279 3041369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849422.pem
	I1101 11:32:07.005610 3041369 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:50 /usr/share/ca-certificates/2849422.pem
	I1101 11:32:07.005665 3041369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849422.pem
	I1101 11:32:07.046956 3041369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2849422.pem /etc/ssl/certs/51391683.0"
	I1101 11:32:07.055219 3041369 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:32:07.058726 3041369 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 11:32:07.058768 3041369 kubeadm.go:401] StartCluster: {Name:cert-expiration-409334 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-409334 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:32:07.058828 3041369 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1101 11:32:07.058889 3041369 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:32:07.085309 3041369 cri.go:89] found id: ""
	I1101 11:32:07.085379 3041369 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:32:07.094128 3041369 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:32:07.101999 3041369 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 11:32:07.102051 3041369 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:32:07.109565 3041369 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:32:07.109573 3041369 kubeadm.go:158] found existing configuration files:
	
	I1101 11:32:07.109634 3041369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:32:07.117311 3041369 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:32:07.117372 3041369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:32:07.124819 3041369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:32:07.133144 3041369 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:32:07.133199 3041369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:32:07.140387 3041369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:32:07.147924 3041369 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:32:07.147976 3041369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:32:07.155576 3041369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:32:07.163093 3041369 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:32:07.163151 3041369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 11:32:07.170537 3041369 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 11:32:07.213262 3041369 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 11:32:07.213495 3041369 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 11:32:07.236119 3041369 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 11:32:07.236197 3041369 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1101 11:32:07.236239 3041369 kubeadm.go:319] OS: Linux
	I1101 11:32:07.236306 3041369 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 11:32:07.236358 3041369 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1101 11:32:07.236412 3041369 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 11:32:07.236471 3041369 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 11:32:07.236522 3041369 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 11:32:07.236572 3041369 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 11:32:07.236629 3041369 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 11:32:07.236692 3041369 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 11:32:07.236740 3041369 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1101 11:32:07.304179 3041369 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 11:32:07.304309 3041369 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 11:32:07.304424 3041369 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 11:32:07.309646 3041369 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 11:32:07.315783 3041369 out.go:252]   - Generating certificates and keys ...
	I1101 11:32:07.315901 3041369 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 11:32:07.315973 3041369 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 11:32:07.741427 3041369 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 11:32:08.382636 3041369 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 11:32:08.914817 3041369 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 11:32:09.394970 3041369 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 11:32:10.455664 3041369 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 11:32:10.456181 3041369 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-409334 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 11:32:10.944085 3041369 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 11:32:10.944621 3041369 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-409334 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 11:32:11.115960 3041369 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 11:32:12.371004 3041369 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 11:32:13.692394 3041369 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 11:32:13.692636 3041369 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 11:32:14.160359 3041369 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 11:32:14.620557 3041369 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 11:32:14.701532 3041369 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 11:32:15.976249 3041369 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 11:32:16.845218 3041369 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 11:32:16.845761 3041369 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 11:32:16.848491 3041369 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 11:32:16.851985 3041369 out.go:252]   - Booting up control plane ...
	I1101 11:32:16.852094 3041369 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 11:32:16.852176 3041369 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 11:32:16.852245 3041369 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 11:32:16.884277 3041369 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 11:32:16.884379 3041369 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 11:32:16.893610 3041369 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 11:32:16.893988 3041369 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 11:32:16.894264 3041369 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 11:32:17.040325 3041369 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 11:32:17.040437 3041369 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 11:32:18.040838 3041369 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001956263s
	I1101 11:32:18.043991 3041369 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 11:32:18.044078 3041369 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 11:32:18.044286 3041369 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 11:32:18.044542 3041369 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 11:32:21.789614 3041369 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.744764949s
	I1101 11:32:23.521588 3041369 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.476583768s
	I1101 11:32:24.545979 3041369 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501773793s
	I1101 11:32:24.566643 3041369 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 11:32:24.583829 3041369 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 11:32:24.599294 3041369 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 11:32:24.599490 3041369 kubeadm.go:319] [mark-control-plane] Marking the node cert-expiration-409334 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 11:32:24.611263 3041369 kubeadm.go:319] [bootstrap-token] Using token: wu4dwd.yqgvc7pr0ub9n1b7
	I1101 11:32:24.634664 3018443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6m0.732520186s)
	W1101 11:32:24.634706 3018443 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	W1101 11:32:24.634818 3018443 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	]
	I1101 11:32:24.635074 3018443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6m0.696096314s)
	W1101 11:32:24.635109 3018443 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	W1101 11:32:24.635169 3018443 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	]
	I1101 11:32:24.635423 3018443 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6m0.645864249s)
	I1101 11:32:24.635467 3018443 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:32:24.638459 3018443 out.go:203] 
	I1101 11:32:24.638459 3018443 out.go:179] * Enabled addons: 
	I1101 11:32:24.641362 3018443 addons.go:515] duration metric: took 6m1.131881281s for enable addons: enabled=[]
	W1101 11:32:24.641426 3018443 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1101 11:32:24.641454 3018443 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1101 11:32:24.641463 3018443 out.go:285] * Related issues:
	W1101 11:32:24.641476 3018443 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1101 11:32:24.641498 3018443 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1101 11:32:24.646470 3018443 out.go:203] 
	I1101 11:32:24.614256 3041369 out.go:252]   - Configuring RBAC rules ...
	I1101 11:32:24.614387 3041369 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 11:32:24.619594 3041369 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 11:32:24.633741 3041369 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 11:32:24.640690 3041369 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 11:32:24.646146 3041369 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 11:32:24.673509 3041369 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 11:32:24.955242 3041369 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 11:32:25.403364 3041369 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 11:32:25.952310 3041369 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 11:32:25.953283 3041369 kubeadm.go:319] 
	I1101 11:32:25.953354 3041369 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 11:32:25.953359 3041369 kubeadm.go:319] 
	I1101 11:32:25.953439 3041369 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 11:32:25.953443 3041369 kubeadm.go:319] 
	I1101 11:32:25.953487 3041369 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 11:32:25.953546 3041369 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 11:32:25.953599 3041369 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 11:32:25.953603 3041369 kubeadm.go:319] 
	I1101 11:32:25.953655 3041369 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 11:32:25.953659 3041369 kubeadm.go:319] 
	I1101 11:32:25.953705 3041369 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 11:32:25.953709 3041369 kubeadm.go:319] 
	I1101 11:32:25.953760 3041369 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 11:32:25.953859 3041369 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 11:32:25.953942 3041369 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 11:32:25.953956 3041369 kubeadm.go:319] 
	I1101 11:32:25.954061 3041369 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 11:32:25.954150 3041369 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 11:32:25.954156 3041369 kubeadm.go:319] 
	I1101 11:32:25.954254 3041369 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wu4dwd.yqgvc7pr0ub9n1b7 \
	I1101 11:32:25.954365 3041369 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a5adf16420337ba808498f802bdf7612eba2032e501aa49cc97ee053fc354fce \
	I1101 11:32:25.954399 3041369 kubeadm.go:319] 	--control-plane 
	I1101 11:32:25.954403 3041369 kubeadm.go:319] 
	I1101 11:32:25.954501 3041369 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 11:32:25.954505 3041369 kubeadm.go:319] 
	I1101 11:32:25.954596 3041369 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wu4dwd.yqgvc7pr0ub9n1b7 \
	I1101 11:32:25.954724 3041369 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a5adf16420337ba808498f802bdf7612eba2032e501aa49cc97ee053fc354fce 
	I1101 11:32:25.958357 3041369 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1101 11:32:25.958588 3041369 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1101 11:32:25.958697 3041369 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 11:32:25.958711 3041369 cni.go:84] Creating CNI manager for ""
	I1101 11:32:25.958718 3041369 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1101 11:32:25.963706 3041369 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 11:32:25.966609 3041369 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 11:32:25.970563 3041369 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 11:32:25.970574 3041369 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 11:32:25.985561 3041369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 11:32:26.281342 3041369 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:32:26.281472 3041369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:32:26.281554 3041369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-409334 minikube.k8s.io/updated_at=2025_11_01T11_32_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=cert-expiration-409334 minikube.k8s.io/primary=true
	I1101 11:32:26.434662 3041369 kubeadm.go:1114] duration metric: took 153.236519ms to wait for elevateKubeSystemPrivileges
	I1101 11:32:26.434688 3041369 ops.go:34] apiserver oom_adj: -16
	I1101 11:32:26.434695 3041369 kubeadm.go:403] duration metric: took 19.375930536s to StartCluster
	I1101 11:32:26.434710 3041369 settings.go:142] acquiring lock: {Name:mk5646e8bf39bd11e3ceea772a0783343ff08308 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:32:26.434766 3041369 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-2847530/kubeconfig
	I1101 11:32:26.435761 3041369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-2847530/kubeconfig: {Name:mk30d6c204d7a4b60522139b4b98bc7edaea9653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:32:26.436030 3041369 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1101 11:32:26.436105 3041369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 11:32:26.436341 3041369 config.go:182] Loaded profile config "cert-expiration-409334": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 11:32:26.436375 3041369 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:32:26.436432 3041369 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-409334"
	I1101 11:32:26.436446 3041369 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-409334"
	I1101 11:32:26.436467 3041369 host.go:66] Checking if "cert-expiration-409334" exists ...
	I1101 11:32:26.436946 3041369 cli_runner.go:164] Run: docker container inspect cert-expiration-409334 --format={{.State.Status}}
	I1101 11:32:26.437328 3041369 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-409334"
	I1101 11:32:26.437342 3041369 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-409334"
	I1101 11:32:26.437589 3041369 cli_runner.go:164] Run: docker container inspect cert-expiration-409334 --format={{.State.Status}}
	I1101 11:32:26.439399 3041369 out.go:179] * Verifying Kubernetes components...
	I1101 11:32:26.442470 3041369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:32:26.474319 3041369 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-409334"
	I1101 11:32:26.474346 3041369 host.go:66] Checking if "cert-expiration-409334" exists ...
	I1101 11:32:26.474799 3041369 cli_runner.go:164] Run: docker container inspect cert-expiration-409334 --format={{.State.Status}}
	I1101 11:32:26.479005 3041369 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:32:26.481949 3041369 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:32:26.481960 3041369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:32:26.482026 3041369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-409334
	I1101 11:32:26.528241 3041369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37066 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/cert-expiration-409334/id_rsa Username:docker}
	I1101 11:32:26.539233 3041369 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:32:26.539246 3041369 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:32:26.539306 3041369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-409334
	I1101 11:32:26.567228 3041369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37066 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/cert-expiration-409334/id_rsa Username:docker}
	I1101 11:32:26.767662 3041369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:32:26.796097 3041369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:32:26.796124 3041369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 11:32:26.797966 3041369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:32:27.181930 3041369 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 11:32:27.183822 3041369 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:32:27.183973 3041369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:32:27.434866 3041369 api_server.go:72] duration metric: took 998.734728ms to wait for apiserver process to appear ...
	I1101 11:32:27.434889 3041369 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:32:27.434905 3041369 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 11:32:27.437744 3041369 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1101 11:32:27.440702 3041369 addons.go:515] duration metric: took 1.004304407s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1101 11:32:27.446100 3041369 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 11:32:27.447105 3041369 api_server.go:141] control plane version: v1.34.1
	I1101 11:32:27.447119 3041369 api_server.go:131] duration metric: took 12.22479ms to wait for apiserver health ...
	I1101 11:32:27.447126 3041369 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:32:27.450808 3041369 system_pods.go:59] 5 kube-system pods found
	I1101 11:32:27.450828 3041369 system_pods.go:61] "etcd-cert-expiration-409334" [ee9d6a7c-0dba-4c5b-b73b-b1ba00b0da8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:32:27.450835 3041369 system_pods.go:61] "kube-apiserver-cert-expiration-409334" [55dd40c7-9a1d-4942-8213-8b773b4ca516] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:32:27.450845 3041369 system_pods.go:61] "kube-controller-manager-cert-expiration-409334" [b391eea2-8cab-4f76-822e-fcfe41b55f54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:32:27.450852 3041369 system_pods.go:61] "kube-scheduler-cert-expiration-409334" [ed0a7176-1803-42a2-8e31-67a6c67a657b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:32:27.450856 3041369 system_pods.go:61] "storage-provisioner" [3a3f0e51-479f-4b14-8c76-1ae6597b4d53] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 11:32:27.450860 3041369 system_pods.go:74] duration metric: took 3.730732ms to wait for pod list to return data ...
	I1101 11:32:27.450871 3041369 kubeadm.go:587] duration metric: took 1.014821405s to wait for: map[apiserver:true system_pods:true]
	I1101 11:32:27.450882 3041369 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:32:27.455919 3041369 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 11:32:27.455937 3041369 node_conditions.go:123] node cpu capacity is 2
	I1101 11:32:27.455948 3041369 node_conditions.go:105] duration metric: took 5.062641ms to run NodePressure ...
	I1101 11:32:27.455958 3041369 start.go:242] waiting for startup goroutines ...
	I1101 11:32:27.685484 3041369 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-409334" context rescaled to 1 replicas
	I1101 11:32:27.685505 3041369 start.go:247] waiting for cluster config update ...
	I1101 11:32:27.685516 3041369 start.go:256] writing updated cluster config ...
	I1101 11:32:27.685811 3041369 ssh_runner.go:195] Run: rm -f paused
	I1101 11:32:27.741795 3041369 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1101 11:32:27.744955 3041369 out.go:179] * Done! kubectl is now configured to use "cert-expiration-409334" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                 NAMESPACE
	de4333a8a45db       43911e833d64d       23 seconds ago      Exited              kube-apiserver            7                   42288a20fea4d       kube-apiserver-kubernetes-upgrade-847244            kube-system
	648f76ff45db1       7eb2c6ff0c5a7       25 seconds ago      Exited              kube-controller-manager   7                   6db5f9097ebb7       kube-controller-manager-kubernetes-upgrade-847244   kube-system
	9b17f0903b8f0       b5f57ec6b9867       30 seconds ago      Exited              kube-scheduler            7                   4de1031a74db1       kube-scheduler-kubernetes-upgrade-847244            kube-system
	b4011f9c9dbbc       a1894772a478e       5 minutes ago       Running             etcd                      0                   b06fa7f25851a       etcd-kubernetes-upgrade-847244                      kube-system
	b51de19be84d9       05baa95f5142d       6 minutes ago       Running             kube-proxy                0                   ad421550d1031       kube-proxy-7dl6j                                    kube-system
	1c6dea165b594       b1a8c6f707935       6 minutes ago       Running             kindnet-cni               0                   c617fc5e0cae8       kindnet-w8887                                       kube-system
	
	
	==> containerd <==
	Nov 01 11:32:11 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:11.533733801Z" level=info msg="shim disconnected" id=9b17f0903b8f0db2b1cfc59f68751c825b52db1b5797c7561030f52aeac7a53f namespace=k8s.io
	Nov 01 11:32:11 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:11.533774850Z" level=warning msg="cleaning up after shim disconnected" id=9b17f0903b8f0db2b1cfc59f68751c825b52db1b5797c7561030f52aeac7a53f namespace=k8s.io
	Nov 01 11:32:11 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:11.533811912Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 01 11:32:12 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:12.421537461Z" level=info msg="RemoveContainer for \"e3b99cb0f13ca52bc43a2b5658ce20a19e6620ce1a1e0b6e895f5995ce79f5c1\""
	Nov 01 11:32:12 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:12.427601008Z" level=info msg="RemoveContainer for \"e3b99cb0f13ca52bc43a2b5658ce20a19e6620ce1a1e0b6e895f5995ce79f5c1\" returns successfully"
	Nov 01 11:32:15 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:15.260872022Z" level=info msg="CreateContainer within sandbox \"6db5f9097ebb7d27d40391fed4851a7d311afae516731addc773ba2401a46cd5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:7,}"
	Nov 01 11:32:15 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:15.296664512Z" level=info msg="CreateContainer within sandbox \"6db5f9097ebb7d27d40391fed4851a7d311afae516731addc773ba2401a46cd5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:7,} returns container id \"648f76ff45db1b0f879df637ed5c02394ecddc3c457b8dd71987e9321040f29c\""
	Nov 01 11:32:15 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:15.297421979Z" level=info msg="StartContainer for \"648f76ff45db1b0f879df637ed5c02394ecddc3c457b8dd71987e9321040f29c\""
	Nov 01 11:32:15 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:15.392205309Z" level=info msg="StartContainer for \"648f76ff45db1b0f879df637ed5c02394ecddc3c457b8dd71987e9321040f29c\" returns successfully"
	Nov 01 11:32:16 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:16.743945047Z" level=info msg="received exit event container_id:\"648f76ff45db1b0f879df637ed5c02394ecddc3c457b8dd71987e9321040f29c\" id:\"648f76ff45db1b0f879df637ed5c02394ecddc3c457b8dd71987e9321040f29c\" pid:4569 exit_status:1 exited_at:{seconds:1761996736 nanos:743599005}"
	Nov 01 11:32:16 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:16.774777343Z" level=info msg="shim disconnected" id=648f76ff45db1b0f879df637ed5c02394ecddc3c457b8dd71987e9321040f29c namespace=k8s.io
	Nov 01 11:32:16 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:16.775033089Z" level=warning msg="cleaning up after shim disconnected" id=648f76ff45db1b0f879df637ed5c02394ecddc3c457b8dd71987e9321040f29c namespace=k8s.io
	Nov 01 11:32:16 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:16.775180761Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 01 11:32:17 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:17.255044488Z" level=info msg="CreateContainer within sandbox \"42288a20fea4df545327e51074d1e562cb52856ea92db59e567c0170f72ab086\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:7,}"
	Nov 01 11:32:17 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:17.288865473Z" level=info msg="CreateContainer within sandbox \"42288a20fea4df545327e51074d1e562cb52856ea92db59e567c0170f72ab086\" for &ContainerMetadata{Name:kube-apiserver,Attempt:7,} returns container id \"de4333a8a45dbc03a427f80de8999a807aeda4b082b581351d87e52abe7b6578\""
	Nov 01 11:32:17 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:17.289730417Z" level=info msg="StartContainer for \"de4333a8a45dbc03a427f80de8999a807aeda4b082b581351d87e52abe7b6578\""
	Nov 01 11:32:17 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:17.384759409Z" level=info msg="StartContainer for \"de4333a8a45dbc03a427f80de8999a807aeda4b082b581351d87e52abe7b6578\" returns successfully"
	Nov 01 11:32:17 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:17.463531063Z" level=info msg="RemoveContainer for \"73895c1835a03b409f3d617f6e292755637d494ad6dd61d7e51b306ae54bfc99\""
	Nov 01 11:32:17 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:17.470253176Z" level=info msg="RemoveContainer for \"73895c1835a03b409f3d617f6e292755637d494ad6dd61d7e51b306ae54bfc99\" returns successfully"
	Nov 01 11:32:17 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:17.471042011Z" level=info msg="received exit event container_id:\"de4333a8a45dbc03a427f80de8999a807aeda4b082b581351d87e52abe7b6578\" id:\"de4333a8a45dbc03a427f80de8999a807aeda4b082b581351d87e52abe7b6578\" pid:4630 exit_status:1 exited_at:{seconds:1761996737 nanos:468517570}"
	Nov 01 11:32:17 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:17.497124527Z" level=info msg="shim disconnected" id=de4333a8a45dbc03a427f80de8999a807aeda4b082b581351d87e52abe7b6578 namespace=k8s.io
	Nov 01 11:32:17 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:17.497172222Z" level=warning msg="cleaning up after shim disconnected" id=de4333a8a45dbc03a427f80de8999a807aeda4b082b581351d87e52abe7b6578 namespace=k8s.io
	Nov 01 11:32:17 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:17.497211081Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 01 11:32:18 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:18.475795863Z" level=info msg="RemoveContainer for \"f4c7809f97dd38ce62c750323cfb5d5967d4375fb27fef33ac272074cf799b1c\""
	Nov 01 11:32:18 kubernetes-upgrade-847244 containerd[2055]: time="2025-11-01T11:32:18.482009642Z" level=info msg="RemoveContainer for \"f4c7809f97dd38ce62c750323cfb5d5967d4375fb27fef33ac272074cf799b1c\" returns successfully"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[Nov 1 10:42] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [b4011f9c9dbbc92872c73d10b73d0a36e792e38e09132b50ac40e74d85fe37bf] <==
	{"level":"info","ts":"2025-11-01T11:27:03.787291Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2025-11-01T11:27:03.787349Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-11-01T11:27:03.787404Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-01T11:27:03.787423Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-11-01T11:27:03.787440Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2025-11-01T11:27:03.788456Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2025-11-01T11:27:03.788493Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-11-01T11:27:03.788521Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2025-11-01T11:27:03.788550Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2025-11-01T11:27:03.789714Z","caller":"etcdserver/server.go:2409","msg":"updating cluster version using v3 API","from":"3.5","to":"3.6"}
	{"level":"info","ts":"2025-11-01T11:27:03.790723Z","caller":"etcdserver/server.go:1804","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-847244 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T11:27:03.790943Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T11:27:03.791101Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T11:27:03.791232Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T11:27:03.791262Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T11:27:03.791548Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.5","to":"3.6"}
	{"level":"info","ts":"2025-11-01T11:27:03.791661Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-01T11:27:03.791684Z","caller":"etcdserver/server.go:2424","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-01T11:27:03.791747Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-11-01T11:27:03.791802Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-11-01T11:27:03.792208Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"warn","ts":"2025-11-01T11:27:03.792490Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-11-01T11:27:03.792568Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-01T11:27:03.797137Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T11:27:03.801995Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 11:33:41 up 20:16,  0 user,  load average: 0.87, 1.77, 2.15
	Linux kubernetes-upgrade-847244 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1c6dea165b5943127ff676d80c00884f9b27b3b977dcf9b7d51d21d2911f8f22] <==
	I1101 11:31:37.468596       1 main.go:301] handling current node
	I1101 11:31:47.470773       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:31:47.470810       1 main.go:301] handling current node
	I1101 11:31:57.467924       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:31:57.467959       1 main.go:301] handling current node
	I1101 11:32:07.466593       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:32:07.466629       1 main.go:301] handling current node
	I1101 11:32:17.471295       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:32:17.471324       1 main.go:301] handling current node
	I1101 11:32:27.468551       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:32:27.468579       1 main.go:301] handling current node
	I1101 11:32:37.467935       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:32:37.467969       1 main.go:301] handling current node
	I1101 11:32:47.470297       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:32:47.470332       1 main.go:301] handling current node
	I1101 11:32:57.469379       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:32:57.469414       1 main.go:301] handling current node
	I1101 11:33:07.468107       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:33:07.468142       1 main.go:301] handling current node
	I1101 11:33:17.470266       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:33:17.470299       1 main.go:301] handling current node
	I1101 11:33:27.470984       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:33:27.471019       1 main.go:301] handling current node
	I1101 11:33:37.469872       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 11:33:37.469908       1 main.go:301] handling current node
	
	
	==> kube-apiserver [de4333a8a45dbc03a427f80de8999a807aeda4b082b581351d87e52abe7b6578] <==
	I1101 11:32:17.430349       1 options.go:263] external host was not specified, using 192.168.76.2
	I1101 11:32:17.436177       1 server.go:150] Version: v1.34.1
	I1101 11:32:17.436392       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1101 11:32:17.436808       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use"
	
	
	==> kube-controller-manager [648f76ff45db1b0f879df637ed5c02394ecddc3c457b8dd71987e9321040f29c] <==
	I1101 11:32:16.735487       1 serving.go:386] Generated self-signed cert in-memory
	E1101 11:32:16.738546       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10257: listen tcp 127.0.0.1:10257: bind: address already in use"
	
	
	==> kube-proxy [b51de19be84d9ffa7535b3f7630b35f424bce7236fa03a2d9bd963d2dbf75db4] <==
	I1101 11:26:32.040555       1 server_linux.go:53] "Using iptables proxy"
	I1101 11:26:32.141469       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 11:26:32.242568       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:26:32.242608       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 11:26:32.242704       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:26:32.269328       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 11:26:32.269532       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:26:32.274839       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:26:32.275288       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:26:32.275554       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:26:32.277194       1 config.go:200] "Starting service config controller"
	I1101 11:26:32.277351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:26:32.277443       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:26:32.277526       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:26:32.277610       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:26:32.277677       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:26:32.280347       1 config.go:309] "Starting node config controller"
	I1101 11:26:32.280491       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:26:32.280580       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 11:26:32.377972       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 11:26:32.378186       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:26:32.378214       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	E1101 11:27:06.282405       1 event_broadcaster.go:270] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kubernetes-upgrade-847244.1873de62212ea2d4  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2025-11-01 11:26:32.276860733 +0000 UTC m=+0.281530253,Series:nil,ReportingController:kube-proxy,ReportingInstance:kube-proxy-kubernetes-upgrade-847244,Action:StartKubeProxy,Reason:Starting,Regarding:{Node  kubernetes-upgrade-847244 kubernetes-upgrade-847244   },Related:nil,Note:,Type:Normal,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
	
	
	==> kube-scheduler [9b17f0903b8f0db2b1cfc59f68751c825b52db1b5797c7561030f52aeac7a53f] <==
	I1101 11:32:11.477343       1 serving.go:386] Generated self-signed cert in-memory
	E1101 11:32:11.478626       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
	
	
	==> kubelet <==
	Nov 01 11:33:08 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:08.251987    1142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-kubernetes-upgrade-847244_kube-system(a578269af95b9cb5f363557cea9f3e5d)\"" pod="kube-system/kube-scheduler-kubernetes-upgrade-847244" podUID="a578269af95b9cb5f363557cea9f3e5d"
	Nov 01 11:33:10 kubernetes-upgrade-847244 kubelet[1142]: I1101 11:33:10.251509    1142 scope.go:117] "RemoveContainer" containerID="648f76ff45db1b0f879df637ed5c02394ecddc3c457b8dd71987e9321040f29c"
	Nov 01 11:33:10 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:10.251756    1142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-847244_kube-system(bf0b96b9f841986039f090c08bb885fc)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-847244" podUID="bf0b96b9f841986039f090c08bb885fc"
	Nov 01 11:33:11 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:11.237008    1142 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-847244?timeout=10s\": context deadline exceeded" interval="7s"
	Nov 01 11:33:15 kubernetes-upgrade-847244 kubelet[1142]: I1101 11:33:15.251925    1142 scope.go:117] "RemoveContainer" containerID="de4333a8a45dbc03a427f80de8999a807aeda4b082b581351d87e52abe7b6578"
	Nov 01 11:33:15 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:15.252547    1142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-847244_kube-system(bfceec379067d2bd6b2a14b09422f313)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-847244" podUID="bfceec379067d2bd6b2a14b09422f313"
	Nov 01 11:33:18 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:18.062164    1142 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-847244\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-847244?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 01 11:33:18 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:18.062707    1142 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count"
	Nov 01 11:33:18 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:18.440515    1142 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-scheduler-kubernetes-upgrade-847244)" podUID="a578269af95b9cb5f363557cea9f3e5d" pod="kube-system/kube-scheduler-kubernetes-upgrade-847244"
	Nov 01 11:33:19 kubernetes-upgrade-847244 kubelet[1142]: I1101 11:33:19.251556    1142 scope.go:117] "RemoveContainer" containerID="9b17f0903b8f0db2b1cfc59f68751c825b52db1b5797c7561030f52aeac7a53f"
	Nov 01 11:33:19 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:19.251736    1142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-kubernetes-upgrade-847244_kube-system(a578269af95b9cb5f363557cea9f3e5d)\"" pod="kube-system/kube-scheduler-kubernetes-upgrade-847244" podUID="a578269af95b9cb5f363557cea9f3e5d"
	Nov 01 11:33:21 kubernetes-upgrade-847244 kubelet[1142]: I1101 11:33:21.251953    1142 kubelet.go:3202] "Trying to delete pod" pod="kube-system/etcd-kubernetes-upgrade-847244" podUID="2cbac73c-24c9-45c9-a6b4-8aead9ef3133"
	Nov 01 11:33:25 kubernetes-upgrade-847244 kubelet[1142]: I1101 11:33:25.252046    1142 scope.go:117] "RemoveContainer" containerID="648f76ff45db1b0f879df637ed5c02394ecddc3c457b8dd71987e9321040f29c"
	Nov 01 11:33:25 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:25.253507    1142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-847244_kube-system(bf0b96b9f841986039f090c08bb885fc)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-847244" podUID="bf0b96b9f841986039f090c08bb885fc"
	Nov 01 11:33:27 kubernetes-upgrade-847244 kubelet[1142]: I1101 11:33:27.252538    1142 scope.go:117] "RemoveContainer" containerID="de4333a8a45dbc03a427f80de8999a807aeda4b082b581351d87e52abe7b6578"
	Nov 01 11:33:27 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:27.253175    1142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-847244_kube-system(bfceec379067d2bd6b2a14b09422f313)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-847244" podUID="bfceec379067d2bd6b2a14b09422f313"
	Nov 01 11:33:28 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:28.239046    1142 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-847244?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Nov 01 11:33:30 kubernetes-upgrade-847244 kubelet[1142]: I1101 11:33:30.251459    1142 scope.go:117] "RemoveContainer" containerID="9b17f0903b8f0db2b1cfc59f68751c825b52db1b5797c7561030f52aeac7a53f"
	Nov 01 11:33:30 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:30.251675    1142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-kubernetes-upgrade-847244_kube-system(a578269af95b9cb5f363557cea9f3e5d)\"" pod="kube-system/kube-scheduler-kubernetes-upgrade-847244" podUID="a578269af95b9cb5f363557cea9f3e5d"
	Nov 01 11:33:38 kubernetes-upgrade-847244 kubelet[1142]: I1101 11:33:38.252340    1142 scope.go:117] "RemoveContainer" containerID="de4333a8a45dbc03a427f80de8999a807aeda4b082b581351d87e52abe7b6578"
	Nov 01 11:33:38 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:38.252535    1142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-847244_kube-system(bfceec379067d2bd6b2a14b09422f313)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-847244" podUID="bfceec379067d2bd6b2a14b09422f313"
	Nov 01 11:33:38 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:38.337472    1142 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-01T11:33:28Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-01T11:33:28Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-01T11:33:28Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-01T11:33:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-01T11:33:28Z\\\",\\\"message\\\":\\\"kubelet is posting ready status\\\",\\\"reason\\\":\\\"KubeletReady\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:e36c081683425b5b3
bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\\\",\\\"registry.k8s.io/etcd:3.6.4-0\\\"],\\\"sizeBytes\\\":98207481},{\\\"names\\\":[\\\"docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a\\\",\\\"docker.io/kindest/kindnetd:v20250512-df8de77b\\\"],\\\"sizeBytes\\\":40636774},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\\\",\\\"registry.k8s.io/kube-apiserver:v1.34.1\\\"],\\\"sizeBytes\\\":24571109},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\\\",\\\"registry.k8s.io/kube-proxy:v1.34.1\\\"],\\\"sizeBytes\\\":22788047},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\\\",\\\"registry.k8s.io/kube-controller-manager:v1.34.1\\\"],\\\"sizeBytes\\\":20720058},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233
976353a66e4d77eb5d0530e9118e94b7d46fb3500\\\",\\\"registry.k8s.io/kube-scheduler:v1.34.1\\\"],\\\"sizeBytes\\\":15779817},{\\\"names\\\":[\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\"],\\\"sizeBytes\\\":8032639},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\\\",\\\"registry.k8s.io/pause:3.10.1\\\"],\\\"sizeBytes\\\":267939}]}}\" for node \"kubernetes-upgrade-847244\": Patch \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-847244/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
	Nov 01 11:33:40 kubernetes-upgrade-847244 kubelet[1142]: I1101 11:33:40.252240    1142 scope.go:117] "RemoveContainer" containerID="648f76ff45db1b0f879df637ed5c02394ecddc3c457b8dd71987e9321040f29c"
	Nov 01 11:33:40 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:40.252420    1142 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-847244_kube-system(bf0b96b9f841986039f090c08bb885fc)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-847244" podUID="bf0b96b9f841986039f090c08bb885fc"
	Nov 01 11:33:41 kubernetes-upgrade-847244 kubelet[1142]: E1101 11:33:41.470878    1142 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-kubernetes-upgrade-847244.1873de5b781dc548  kube-system   437 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-kubernetes-upgrade-847244,UID:bf0b96b9f841986039f090c08bb885fc,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-847244,},FirstTimestamp:2025-11-01 11:26:03 +0000 UTC,LastTimestamp:2025-11-01 11:26:29.255963529 +0000 UTC m=+32.164265618,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:k
ubelet,ReportingInstance:kubernetes-upgrade-847244,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-847244 -n kubernetes-upgrade-847244
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-847244 -n kubernetes-upgrade-847244: exit status 2 (15.828767784s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-847244" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-847244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-847244
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-847244: (2.262022605s)
--- FAIL: TestKubernetesUpgrade (539.09s)

                                                
                                    

Test pass (300/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 17.14
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 5.15
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 172.28
29 TestAddons/serial/Volcano 40.54
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.84
35 TestAddons/parallel/Registry 16.36
36 TestAddons/parallel/RegistryCreds 0.81
37 TestAddons/parallel/Ingress 19.71
38 TestAddons/parallel/InspektorGadget 5.34
39 TestAddons/parallel/MetricsServer 5.88
41 TestAddons/parallel/CSI 52.8
42 TestAddons/parallel/Headlamp 17.98
43 TestAddons/parallel/CloudSpanner 6.59
44 TestAddons/parallel/LocalPath 52.42
45 TestAddons/parallel/NvidiaDevicePlugin 5.76
46 TestAddons/parallel/Yakd 11.85
48 TestAddons/StoppedEnableDisable 12.48
49 TestCertOptions 39.3
50 TestCertExpiration 223.39
52 TestForceSystemdFlag 37.42
53 TestForceSystemdEnv 40.23
54 TestDockerEnvContainerd 44.04
58 TestErrorSpam/setup 34.12
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.12
61 TestErrorSpam/pause 1.76
62 TestErrorSpam/unpause 1.79
63 TestErrorSpam/stop 12.28
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 51.13
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.5
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.54
75 TestFunctional/serial/CacheCmd/cache/add_local 1.31
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 44.47
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.41
86 TestFunctional/serial/LogsFileCmd 1.47
87 TestFunctional/serial/InvalidService 4.87
89 TestFunctional/parallel/ConfigCmd 0.48
91 TestFunctional/parallel/DryRun 0.48
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.07
97 TestFunctional/parallel/ServiceCmdConnect 9.74
98 TestFunctional/parallel/AddonsCmd 0.2
99 TestFunctional/parallel/PersistentVolumeClaim 24
101 TestFunctional/parallel/SSHCmd 0.69
102 TestFunctional/parallel/CpCmd 2.44
104 TestFunctional/parallel/FileSync 0.27
105 TestFunctional/parallel/CertSync 1.71
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
113 TestFunctional/parallel/License 0.31
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.42
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.25
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
129 TestFunctional/parallel/ServiceCmd/List 0.66
130 TestFunctional/parallel/MountCmd/any-port 8.18
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
133 TestFunctional/parallel/ServiceCmd/Format 0.5
134 TestFunctional/parallel/ServiceCmd/URL 0.43
135 TestFunctional/parallel/MountCmd/specific-port 2.15
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.1
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 1.17
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.59
144 TestFunctional/parallel/ImageCommands/Setup 0.62
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.11
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.03
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.27
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 176.31
163 TestMultiControlPlane/serial/DeployApp 6.65
164 TestMultiControlPlane/serial/PingHostFromPods 1.6
165 TestMultiControlPlane/serial/AddWorkerNode 29.01
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
168 TestMultiControlPlane/serial/CopyFile 20.03
169 TestMultiControlPlane/serial/StopSecondaryNode 12.91
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
171 TestMultiControlPlane/serial/RestartSecondaryNode 14.46
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.86
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.31
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.68
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
176 TestMultiControlPlane/serial/StopCluster 36.45
177 TestMultiControlPlane/serial/RestartCluster 60.53
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
179 TestMultiControlPlane/serial/AddSecondaryNode 79.79
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
185 TestJSONOutput/start/Command 48.27
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.71
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.61
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.95
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 39.6
211 TestKicCustomNetwork/use_default_bridge_network 36.15
212 TestKicExistingNetwork 36.27
213 TestKicCustomSubnet 39.09
214 TestKicStaticIP 37.09
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 72.62
219 TestMountStart/serial/StartWithMountFirst 10.36
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 9.46
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.69
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.28
226 TestMountStart/serial/RestartStopped 7.88
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 103.15
231 TestMultiNode/serial/DeployApp2Nodes 4.99
232 TestMultiNode/serial/PingHostFrom2Pods 0.97
233 TestMultiNode/serial/AddNode 26.03
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.3
237 TestMultiNode/serial/StopNode 2.75
238 TestMultiNode/serial/StartAfterStop 7.74
239 TestMultiNode/serial/RestartKeepsNodes 83.59
240 TestMultiNode/serial/DeleteNode 5.67
241 TestMultiNode/serial/StopMultiNode 24.1
242 TestMultiNode/serial/RestartMultiNode 54.45
243 TestMultiNode/serial/ValidateNameConflict 35.99
248 TestPreload 117.47
250 TestScheduledStopUnix 112.18
253 TestInsufficientStorage 13.63
254 TestRunningBinaryUpgrade 55.96
257 TestMissingContainerUpgrade 159.89
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 48.32
261 TestNoKubernetes/serial/StartWithStopK8s 10.32
262 TestNoKubernetes/serial/Start 9.24
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
264 TestNoKubernetes/serial/ProfileList 0.68
265 TestNoKubernetes/serial/Stop 1.29
266 TestNoKubernetes/serial/StartNoArgs 8
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
268 TestStoppedBinaryUpgrade/Setup 0.68
269 TestStoppedBinaryUpgrade/Upgrade 75.24
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.37
279 TestPause/serial/Start 81.55
280 TestPause/serial/SecondStartNoReconfiguration 7.09
281 TestPause/serial/Pause 0.71
282 TestPause/serial/VerifyStatus 0.34
283 TestPause/serial/Unpause 0.63
284 TestPause/serial/PauseAgain 0.84
285 TestPause/serial/DeletePaused 2.41
286 TestPause/serial/VerifyDeletedResources 15.95
294 TestNetworkPlugins/group/false 3.56
299 TestStartStop/group/old-k8s-version/serial/FirstStart 61.69
301 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 78.71
302 TestStartStop/group/old-k8s-version/serial/DeployApp 10.53
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.48
304 TestStartStop/group/old-k8s-version/serial/Stop 12.29
305 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
306 TestStartStop/group/old-k8s-version/serial/SecondStart 57.35
307 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.34
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.13
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
313 TestStartStop/group/old-k8s-version/serial/Pause 2.9
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.87
317 TestStartStop/group/embed-certs/serial/FirstStart 88.64
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.03
323 TestStartStop/group/no-preload/serial/FirstStart 60.37
324 TestStartStop/group/embed-certs/serial/DeployApp 9.41
325 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.48
326 TestStartStop/group/embed-certs/serial/Stop 12.64
327 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
328 TestStartStop/group/embed-certs/serial/SecondStart 50.34
329 TestStartStop/group/no-preload/serial/DeployApp 8.43
330 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.23
331 TestStartStop/group/no-preload/serial/Stop 12.19
332 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
333 TestStartStop/group/no-preload/serial/SecondStart 50.78
334 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
335 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.17
336 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
337 TestStartStop/group/embed-certs/serial/Pause 4.16
339 TestStartStop/group/newest-cni/serial/FirstStart 43.3
340 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
341 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
342 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
343 TestStartStop/group/no-preload/serial/Pause 3.91
344 TestNetworkPlugins/group/auto/Start 58.31
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.25
347 TestStartStop/group/newest-cni/serial/Stop 3.6
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.3
349 TestStartStop/group/newest-cni/serial/SecondStart 24.05
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
353 TestStartStop/group/newest-cni/serial/Pause 4.45
354 TestNetworkPlugins/group/kindnet/Start 84.09
355 TestNetworkPlugins/group/auto/KubeletFlags 0.41
356 TestNetworkPlugins/group/auto/NetCatPod 10.35
357 TestNetworkPlugins/group/auto/DNS 0.22
358 TestNetworkPlugins/group/auto/Localhost 0.19
359 TestNetworkPlugins/group/auto/HairPin 0.2
360 TestNetworkPlugins/group/calico/Start 52.48
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
363 TestNetworkPlugins/group/kindnet/NetCatPod 11.42
364 TestNetworkPlugins/group/kindnet/DNS 0.21
365 TestNetworkPlugins/group/kindnet/Localhost 0.18
366 TestNetworkPlugins/group/kindnet/HairPin 0.23
367 TestNetworkPlugins/group/calico/ControllerPod 6.01
368 TestNetworkPlugins/group/calico/KubeletFlags 0.46
369 TestNetworkPlugins/group/calico/NetCatPod 11.37
370 TestNetworkPlugins/group/calico/DNS 0.24
371 TestNetworkPlugins/group/custom-flannel/Start 70.1
372 TestNetworkPlugins/group/calico/Localhost 0.25
373 TestNetworkPlugins/group/calico/HairPin 0.21
374 TestNetworkPlugins/group/enable-default-cni/Start 79.46
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.26
377 TestNetworkPlugins/group/custom-flannel/DNS 0.17
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
380 TestNetworkPlugins/group/flannel/Start 65.55
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.41
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
386 TestNetworkPlugins/group/bridge/Start 73.49
387 TestNetworkPlugins/group/flannel/ControllerPod 6
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
389 TestNetworkPlugins/group/flannel/NetCatPod 10.35
390 TestNetworkPlugins/group/flannel/DNS 0.19
391 TestNetworkPlugins/group/flannel/Localhost 0.17
392 TestNetworkPlugins/group/flannel/HairPin 0.19
393 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
394 TestNetworkPlugins/group/bridge/NetCatPod 9.27
395 TestNetworkPlugins/group/bridge/DNS 0.17
396 TestNetworkPlugins/group/bridge/Localhost 0.13
397 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.28.0/json-events (17.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-166274 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-166274 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (17.142894943s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (17.14s)
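
For reference, a minimal sketch of reproducing this download-only run by hand, assuming the same out/minikube-linux-arm64 binary; the profile name download-only-demo is arbitrary and not the one used above:

    # Download the v1.28.0 preload and binaries without creating a cluster.
    out/minikube-linux-arm64 start -o=json --download-only -p download-only-demo \
        --force --alsologtostderr \
        --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker

    # The preload tarball should now sit in the local cache (default MINIKUBE_HOME is ~/.minikube).
    ls ~/.minikube/cache/preloaded-tarball/

    # Remove the throwaway profile afterwards.
    out/minikube-linux-arm64 delete -p download-only-demo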

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 10:43:01.463822 2849422 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1101 10:43:01.463941 2849422 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-166274
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-166274: exit status 85 (96.678346ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-166274 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-166274 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:42:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:42:44.368709 2849427 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:42:44.368842 2849427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:44.368854 2849427 out.go:374] Setting ErrFile to fd 2...
	I1101 10:42:44.368859 2849427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:44.369125 2849427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
	W1101 10:42:44.369254 2849427 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21830-2847530/.minikube/config/config.json: open /home/jenkins/minikube-integration/21830-2847530/.minikube/config/config.json: no such file or directory
	I1101 10:42:44.369646 2849427 out.go:368] Setting JSON to true
	I1101 10:42:44.370466 2849427 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":69910,"bootTime":1761923854,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 10:42:44.370532 2849427 start.go:143] virtualization:  
	I1101 10:42:44.374339 2849427 out.go:99] [download-only-166274] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1101 10:42:44.374522 2849427 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 10:42:44.374639 2849427 notify.go:221] Checking for updates...
	I1101 10:42:44.378459 2849427 out.go:171] MINIKUBE_LOCATION=21830
	I1101 10:42:44.381466 2849427 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:42:44.384384 2849427 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	I1101 10:42:44.387302 2849427 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	I1101 10:42:44.390172 2849427 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1101 10:42:44.395757 2849427 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 10:42:44.396029 2849427 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:42:44.416304 2849427 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:42:44.416412 2849427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:44.481553 2849427 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-01 10:42:44.472312292 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:42:44.481659 2849427 docker.go:319] overlay module found
	I1101 10:42:44.484635 2849427 out.go:99] Using the docker driver based on user configuration
	I1101 10:42:44.484674 2849427 start.go:309] selected driver: docker
	I1101 10:42:44.484688 2849427 start.go:930] validating driver "docker" against <nil>
	I1101 10:42:44.484792 2849427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:44.536640 2849427 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-01 10:42:44.527903566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:42:44.536794 2849427 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:42:44.537084 2849427 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1101 10:42:44.537236 2849427 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 10:42:44.540317 2849427 out.go:171] Using Docker driver with root privileges
	I1101 10:42:44.543167 2849427 cni.go:84] Creating CNI manager for ""
	I1101 10:42:44.543242 2849427 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1101 10:42:44.543262 2849427 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:42:44.543340 2849427 start.go:353] cluster config:
	{Name:download-only-166274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-166274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:44.546254 2849427 out.go:99] Starting "download-only-166274" primary control-plane node in "download-only-166274" cluster
	I1101 10:42:44.546270 2849427 cache.go:124] Beginning downloading kic base image for docker with containerd
	I1101 10:42:44.549070 2849427 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:42:44.549104 2849427 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1101 10:42:44.549129 2849427 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:42:44.564869 2849427 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 10:42:44.565028 2849427 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 10:42:44.565145 2849427 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 10:42:44.606179 2849427 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1101 10:42:44.606205 2849427 cache.go:59] Caching tarball of preloaded images
	I1101 10:42:44.606367 2849427 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1101 10:42:44.610890 2849427 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1101 10:42:44.610967 2849427 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1101 10:42:44.705269 2849427 preload.go:290] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1101 10:42:44.705395 2849427 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1101 10:42:48.423770 2849427 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1101 10:42:48.424174 2849427 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/download-only-166274/config.json ...
	I1101 10:42:48.424210 2849427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/download-only-166274/config.json: {Name:mk5945c1dfab7ebf73eec48b43eb4fb4d656128f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:48.425200 2849427 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1101 10:42:48.425390 2849427 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-166274 host does not exist
	  To start a cluster, run: "minikube start -p download-only-166274"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
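
The non-zero exit here is expected rather than a failure: a download-only profile never creates a host, so "minikube logs" prints the audit and last-start sections and exits with status 85, as captured above. A small sketch of checking that directly against a download-only profile:

    out/minikube-linux-arm64 logs -p download-only-166274
    echo "exit status: $?"   # 85 in this run, because the control-plane host does not exist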

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-166274
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (5.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-159265 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-159265 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.149868128s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 10:43:07.059363 2849422 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1101 10:43:07.059406 2849422 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-159265
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-159265: exit status 85 (86.913822ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-166274 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-166274 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p download-only-166274                                                                                                                                                               │ download-only-166274 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ start   │ -o=json --download-only -p download-only-159265 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-159265 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:43:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:43:01.954409 2849623 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:43:01.954605 2849623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:01.954632 2849623 out.go:374] Setting ErrFile to fd 2...
	I1101 10:43:01.954651 2849623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:01.954969 2849623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
	I1101 10:43:01.955463 2849623 out.go:368] Setting JSON to true
	I1101 10:43:01.956404 2849623 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":69928,"bootTime":1761923854,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 10:43:01.956508 2849623 start.go:143] virtualization:  
	I1101 10:43:01.960041 2849623 out.go:99] [download-only-159265] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:43:01.960377 2849623 notify.go:221] Checking for updates...
	I1101 10:43:01.964280 2849623 out.go:171] MINIKUBE_LOCATION=21830
	I1101 10:43:01.967310 2849623 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:43:01.970296 2849623 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	I1101 10:43:01.973311 2849623 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	I1101 10:43:01.976298 2849623 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1101 10:43:01.981990 2849623 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 10:43:01.982473 2849623 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:43:02.014269 2849623 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:43:02.014381 2849623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:02.073723 2849623 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 10:43:02.064108578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:43:02.073835 2849623 docker.go:319] overlay module found
	I1101 10:43:02.076917 2849623 out.go:99] Using the docker driver based on user configuration
	I1101 10:43:02.076966 2849623 start.go:309] selected driver: docker
	I1101 10:43:02.076980 2849623 start.go:930] validating driver "docker" against <nil>
	I1101 10:43:02.077091 2849623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:02.132402 2849623 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-01 10:43:02.122627783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:43:02.132566 2849623 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:43:02.132877 2849623 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1101 10:43:02.133032 2849623 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 10:43:02.136185 2849623 out.go:171] Using Docker driver with root privileges
	I1101 10:43:02.138937 2849623 cni.go:84] Creating CNI manager for ""
	I1101 10:43:02.139035 2849623 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1101 10:43:02.139051 2849623 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:43:02.139160 2849623 start.go:353] cluster config:
	{Name:download-only-159265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-159265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:02.142861 2849623 out.go:99] Starting "download-only-159265" primary control-plane node in "download-only-159265" cluster
	I1101 10:43:02.142890 2849623 cache.go:124] Beginning downloading kic base image for docker with containerd
	I1101 10:43:02.145641 2849623 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:43:02.145678 2849623 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1101 10:43:02.145784 2849623 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:43:02.161792 2849623 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 10:43:02.161902 2849623 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 10:43:02.161921 2849623 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 10:43:02.161925 2849623 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 10:43:02.161932 2849623 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 10:43:02.203503 2849623 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1101 10:43:02.203546 2849623 cache.go:59] Caching tarball of preloaded images
	I1101 10:43:02.204328 2849623 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1101 10:43:02.207430 2849623 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1101 10:43:02.207465 2849623 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1101 10:43:02.298207 2849623 preload.go:290] Got checksum from GCS API "435977642a202d20ca04f26d87d875a8"
	I1101 10:43:02.298263 2849623 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:435977642a202d20ca04f26d87d875a8 -> /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1101 10:43:06.472941 2849623 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1101 10:43:06.473359 2849623 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/download-only-159265/config.json ...
	I1101 10:43:06.473395 2849623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/download-only-159265/config.json: {Name:mkae1566fcb37e44c588f0dda70bb950fc7c4083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:06.474260 2849623 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1101 10:43:06.475086 2849623 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21830-2847530/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-159265 host does not exist
	  To start a cluster, run: "minikube start -p download-only-159265"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-159265
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1101 10:43:08.196219 2849422 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-774607 --alsologtostderr --binary-mirror http://127.0.0.1:39085 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-774607" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-774607
--- PASS: TestBinaryMirror (0.59s)
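
A rough sketch of the same idea outside the test harness: serve Kubernetes binaries from a local HTTP server and point --binary-mirror at it. The port, directory name and layout below are assumptions for illustration, not taken from the test:

    # Serve a local mirror of the dl.k8s.io release binaries (directory layout assumed to follow dl.k8s.io).
    python3 -m http.server 39085 --directory ./k8s-mirror &

    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
        --binary-mirror http://127.0.0.1:39085 \
        --driver=docker --container-runtime=containerd

    out/minikube-linux-arm64 delete -p binary-mirror-demo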

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-442433
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-442433: exit status 85 (68.476924ms)

                                                
                                                
-- stdout --
	* Profile "addons-442433" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-442433"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-442433
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-442433: exit status 85 (71.356616ms)

                                                
                                                
-- stdout --
	* Profile "addons-442433" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-442433"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (172.28s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-442433 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-442433 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m52.277917291s)
--- PASS: TestAddons/Setup (172.28s)
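
The same addons can also be toggled after start-up instead of being passed as --addons flags; a minimal sketch against the profile created here, using addon names taken from the flags above:

    out/minikube-linux-arm64 addons enable registry -p addons-442433
    out/minikube-linux-arm64 addons enable metrics-server -p addons-442433
    out/minikube-linux-arm64 addons enable ingress -p addons-442433

    # Show what is currently enabled for the profile.
    out/minikube-linux-arm64 addons list -p addons-442433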

                                                
                                    
TestAddons/serial/Volcano (40.54s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 74.752521ms
addons_test.go:884: volcano-controller stabilized in 75.188159ms
addons_test.go:876: volcano-admission stabilized in 75.241204ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-tg2qg" [d20878f2-6466-47d1-937e-0673ecbc3078] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003844523s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-8sqk2" [44545ce9-b9da-4bf6-96e3-bc8e0ae52980] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00329966s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-zljf2" [759537d7-e44e-4457-8856-ab4db631ed65] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004234631s
addons_test.go:903: (dbg) Run:  kubectl --context addons-442433 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-442433 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-442433 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [bcf7a8d3-ff19-473d-9a87-8db4649e65fe] Pending
helpers_test.go:352: "test-job-nginx-0" [bcf7a8d3-ff19-473d-9a87-8db4649e65fe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [bcf7a8d3-ff19-473d-9a87-8db4649e65fe] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003426037s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-442433 addons disable volcano --alsologtostderr -v=1: (11.916882391s)
--- PASS: TestAddons/serial/Volcano (40.54s)
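
A sketch of the equivalent manual check, assuming the volcano addon is enabled and a job manifest like the test's testdata/vcjob.yaml:

    kubectl --context addons-442433 create -f testdata/vcjob.yaml
    kubectl --context addons-442433 get vcjob -n my-volcano

    # Wait for the job's pod, using the same label the test selects on.
    kubectl --context addons-442433 -n my-volcano wait pod \
        -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=180s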

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-442433 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-442433 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.84s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-442433 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-442433 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [060bcac6-ac8d-4634-b77f-9155a26bc8f8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [060bcac6-ac8d-4634-b77f-9155a26bc8f8] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004096563s
addons_test.go:694: (dbg) Run:  kubectl --context addons-442433 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-442433 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-442433 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-442433 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.84s)
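
The checks this test performs can be repeated by hand against any pod in a gcp-auth-enabled namespace, here the busybox pod it creates:

    kubectl --context addons-442433 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-442433 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
    kubectl --context addons-442433 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"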

                                                
                                    
TestAddons/parallel/Registry (16.36s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.677861ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-75gxg" [4b131c56-905a-4b6d-8d3f-ea88dd5a05fa] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003176782s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-fcm5h" [d9d77918-db4b-4e15-a4a1-a450c4b50f4d] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004898937s
addons_test.go:392: (dbg) Run:  kubectl --context addons-442433 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-442433 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-442433 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.403418847s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 ip
2025/11/01 10:47:16 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.36s)
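A rough manual equivalent of the registry checks above: probe the registry service from inside the cluster, then hit the node-published port from the host. The probe pod name reg-probe is illustrative, and the /v2/ path assumes the standard Docker registry HTTP API.
    # in-cluster: the registry service must answer on its cluster DNS name
    kubectl --context addons-442433 run reg-probe --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox -- \
      wget --spider -S http://registry.kube-system.svc.cluster.local
    # from the host: registry-proxy publishes port 5000 on the node IP (192.168.49.2 in this run)
    curl -sI "http://$(out/minikube-linux-arm64 -p addons-442433 ip):5000/v2/"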

                                                
                                    
TestAddons/parallel/RegistryCreds (0.81s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.722025ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-442433
addons_test.go:332: (dbg) Run:  kubectl --context addons-442433 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.81s)

                                                
                                    
TestAddons/parallel/Ingress (19.71s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-442433 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-442433 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-442433 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [3691735c-48a6-487c-97a1-c95581f41e3e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [3691735c-48a6-487c-97a1-c95581f41e3e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003005545s
I1101 10:48:30.402467 2849422 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-442433 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-442433 addons disable ingress-dns --alsologtostderr -v=1: (1.10885886s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-442433 addons disable ingress --alsologtostderr -v=1: (7.810133734s)
--- PASS: TestAddons/parallel/Ingress (19.71s)
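Condensing the ingress checks above into a standalone sketch: wait for the controller, apply the testdata manifests named in the log, then route by Host header. The extra curl flags are only there for a compact status-code check.
    kubectl --context addons-442433 wait --for=condition=ready --namespace=ingress-nginx pod \
      --selector=app.kubernetes.io/component=controller --timeout=90s
    kubectl --context addons-442433 replace --force -f testdata/nginx-ingress-v1.yaml
    kubectl --context addons-442433 replace --force -f testdata/nginx-pod-svc.yaml
    # route by Host header through the controller listening inside the node
    out/minikube-linux-arm64 -p addons-442433 ssh "curl -s -o /dev/null -w '%{http_code}' -H 'Host: nginx.example.com' http://127.0.0.1/"
    # ingress-dns: resolve the test hostname against the node IP
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-442433 ip)"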

                                                
                                    
TestAddons/parallel/InspektorGadget (5.34s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-k22gc" [09f1800e-729f-4729-8280-b1661ac71700] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006098275s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.34s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.88s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 48.358412ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-crqcj" [ca84a1ed-02e4-43bb-aa24-bd1b9d7b769b] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003807581s
addons_test.go:463: (dbg) Run:  kubectl --context addons-442433 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.88s)

                                                
                                    
TestAddons/parallel/CSI (52.8s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1101 10:47:43.010460 2849422 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 10:47:43.014562 2849422 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 10:47:43.014594 2849422 kapi.go:107] duration metric: took 7.884725ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.895071ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-442433 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-442433 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b7020e3a-67ae-4a18-9abe-f99e9748f189] Pending
helpers_test.go:352: "task-pv-pod" [b7020e3a-67ae-4a18-9abe-f99e9748f189] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b7020e3a-67ae-4a18-9abe-f99e9748f189] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00440389s
addons_test.go:572: (dbg) Run:  kubectl --context addons-442433 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-442433 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-442433 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-442433 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-442433 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-442433 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-442433 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [40662ed6-f072-4637-86cb-72d1e1142c2a] Pending
helpers_test.go:352: "task-pv-pod-restore" [40662ed6-f072-4637-86cb-72d1e1142c2a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [40662ed6-f072-4637-86cb-72d1e1142c2a] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003181685s
addons_test.go:614: (dbg) Run:  kubectl --context addons-442433 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-442433 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-442433 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-442433 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.941037835s)
--- PASS: TestAddons/parallel/CSI (52.80s)
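The CSI run above walks a full provision / snapshot / restore cycle against the csi-hostpath-driver. A condensed sketch of the same sequence, using the testdata manifests and object names recorded in the log:
    kubectl --context addons-442433 create -f testdata/csi-hostpath-driver/pvc.yaml        # claim: hpvc
    kubectl --context addons-442433 create -f testdata/csi-hostpath-driver/pv-pod.yaml     # pod: task-pv-pod
    kubectl --context addons-442433 create -f testdata/csi-hostpath-driver/snapshot.yaml   # snapshot: new-snapshot-demo
    kubectl --context addons-442433 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
    kubectl --context addons-442433 delete pod task-pv-pod && kubectl --context addons-442433 delete pvc hpvc
    kubectl --context addons-442433 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-442433 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml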

                                                
                                    
TestAddons/parallel/Headlamp (17.98s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-442433 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-442433 --alsologtostderr -v=1: (1.189608474s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-2h4v6" [942d86b0-b52a-4fe3-ba8b-a137844c8572] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-2h4v6" [942d86b0-b52a-4fe3-ba8b-a137844c8572] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003456245s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-442433 addons disable headlamp --alsologtostderr -v=1: (5.783173889s)
--- PASS: TestAddons/parallel/Headlamp (17.98s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.59s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-qxfmq" [6105a0e4-c859-422f-8a03-cdb8533a346f] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003456501s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                    
TestAddons/parallel/LocalPath (52.42s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-442433 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-442433 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442433 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [f2671072-09c7-44e5-854e-4b1addd66259] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [f2671072-09c7-44e5-854e-4b1addd66259] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [f2671072-09c7-44e5-854e-4b1addd66259] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00276876s
addons_test.go:967: (dbg) Run:  kubectl --context addons-442433 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 ssh "cat /opt/local-path-provisioner/pvc-9575c754-e6ee-4b89-93bc-e54a3c2671c1_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-442433 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-442433 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-442433 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.011736325s)
--- PASS: TestAddons/parallel/LocalPath (52.42s)
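The local-path check above reads the provisioned file straight off the node. The directory name is inferred from this run (PV name plus namespace and claim name), so treat the path construction below as an assumption rather than a documented layout.
    PV=$(kubectl --context addons-442433 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    # layout observed above: /opt/local-path-provisioner/<pv-name>_<namespace>_<claim>/
    out/minikube-linux-arm64 -p addons-442433 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"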

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.76s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-ldkwf" [1dd419d0-cfae-4d26-ad12-6f47c60540c0] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.007707578s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.76s)

                                                
                                    
TestAddons/parallel/Yakd (11.85s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-f4scv" [23dde701-8bfc-495f-aa92-9d78e79d4fac] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004282636s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442433 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-442433 addons disable yakd --alsologtostderr -v=1: (5.844085909s)
--- PASS: TestAddons/parallel/Yakd (11.85s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.48s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-442433
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-442433: (12.188556796s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-442433
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-442433
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-442433
--- PASS: TestAddons/StoppedEnableDisable (12.48s)
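The block above appears to verify that addon enable/disable commands still succeed against a stopped profile; roughly:
    out/minikube-linux-arm64 stop -p addons-442433
    out/minikube-linux-arm64 addons enable dashboard -p addons-442433     # accepted while the cluster is stopped
    out/minikube-linux-arm64 addons disable dashboard -p addons-442433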

                                                
                                    
TestCertOptions (39.3s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-038278 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-038278 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.48652618s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-038278 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-038278 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-038278 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-038278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-038278
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-038278: (2.091574966s)
--- PASS: TestCertOptions (39.30s)
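To inspect the effect of the --apiserver-ips / --apiserver-names / --apiserver-port flags used above, one could check the generated certificate's SANs and the advertised server URL; the grep and jsonpath filters are illustrative additions, not part of the test.
    out/minikube-linux-arm64 -p cert-options-038278 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
    # the kubeconfig entry should point at the non-default port 8555
    kubectl --context cert-options-038278 config view -o jsonpath='{.clusters[0].cluster.server}'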

                                                
                                    
TestCertExpiration (223.39s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-409334 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-409334 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (33.892801638s)
E1101 11:32:40.213496 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-409334 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-409334 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.029154447s)
helpers_test.go:175: Cleaning up "cert-expiration-409334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-409334
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-409334: (2.468437362s)
--- PASS: TestCertExpiration (223.39s)

                                                
                                    
TestForceSystemdFlag (37.42s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-560676 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-560676 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.047569513s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-560676 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-560676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-560676
E1101 11:31:01.180593 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-560676: (2.056345527s)
--- PASS: TestForceSystemdFlag (37.42s)

                                                
                                    
TestForceSystemdEnv (40.23s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-686320 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-686320 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.89247527s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-686320 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-686320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-686320
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-686320: (2.03161092s)
--- PASS: TestForceSystemdEnv (40.23s)

                                                
                                    
TestDockerEnvContainerd (44.04s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-782033 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-782033 --driver=docker  --container-runtime=containerd: (28.10260907s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-782033"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-782033": (1.041715805s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-chNwp7XAV3E3/agent.2869763" SSH_AGENT_PID="2869764" DOCKER_HOST=ssh://docker@127.0.0.1:36796 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-chNwp7XAV3E3/agent.2869763" SSH_AGENT_PID="2869764" DOCKER_HOST=ssh://docker@127.0.0.1:36796 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-chNwp7XAV3E3/agent.2869763" SSH_AGENT_PID="2869764" DOCKER_HOST=ssh://docker@127.0.0.1:36796 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.282759464s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-chNwp7XAV3E3/agent.2869763" SSH_AGENT_PID="2869764" DOCKER_HOST=ssh://docker@127.0.0.1:36796 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-782033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-782033
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-782033: (2.165879881s)
--- PASS: TestDockerEnvContainerd (44.04s)
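The docker-env exercise above drives the node's Docker endpoint over SSH (containerd runtime behind it). Interactive use typically goes through eval rather than the explicit SSH_AUTH_SOCK/DOCKER_HOST assignments captured in the log; a sketch:
    eval "$(out/minikube-linux-arm64 -p dockerenv-782033 docker-env --ssh-host --ssh-add)"
    docker version                          # now talks to the engine inside the minikube node
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls | grep minikube-dockerenv-containerd-test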

                                                
                                    
TestErrorSpam/setup (34.12s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-321476 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-321476 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-321476 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-321476 --driver=docker  --container-runtime=containerd: (34.11747185s)
--- PASS: TestErrorSpam/setup (34.12s)

                                                
                                    
TestErrorSpam/start (0.8s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
TestErrorSpam/status (1.12s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 status
--- PASS: TestErrorSpam/status (1.12s)

                                                
                                    
TestErrorSpam/pause (1.76s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 pause
--- PASS: TestErrorSpam/pause (1.76s)

                                                
                                    
TestErrorSpam/unpause (1.79s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
TestErrorSpam/stop (12.28s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 stop: (12.086477167s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-321476 --log_dir /tmp/nospam-321476 stop
--- PASS: TestErrorSpam/stop (12.28s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21830-2847530/.minikube/files/etc/test/nested/copy/2849422/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (51.13s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-269105 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1101 10:51:01.181101 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:01.187511 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:01.198910 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:01.220285 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:01.261713 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:01.343165 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:01.504668 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:01.826312 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:02.468340 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:03.749698 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:06.311997 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:11.434166 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:51:21.676041 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-269105 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (51.133300914s)
--- PASS: TestFunctional/serial/StartWithProxy (51.13s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.5s)
=== RUN   TestFunctional/serial/SoftStart
I1101 10:51:30.652163 2849422 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-269105 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-269105 --alsologtostderr -v=8: (7.499112334s)
functional_test.go:678: soft start took 7.50162271s for "functional-269105" cluster.
I1101 10:51:38.151614 2849422 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.50s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-269105 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-269105 cache add registry.k8s.io/pause:3.1: (1.326283026s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-269105 cache add registry.k8s.io/pause:3.3: (1.183889357s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-269105 cache add registry.k8s.io/pause:latest: (1.026203824s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-269105 /tmp/TestFunctionalserialCacheCmdcacheadd_local3693730338/001
E1101 10:51:42.158054 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 cache add minikube-local-cache-test:functional-269105
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 cache delete minikube-local-cache-test:functional-269105
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-269105
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-269105 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.172002ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-269105 cache reload: (1.106983708s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)
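The cache subtests above amount to the following round trip: add an image to minikube's local cache, remove it from the node runtime, restore it from the cache, and confirm it is present again. A compact sketch using the same commands recorded in the log:
    out/minikube-linux-arm64 -p functional-269105 cache add registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-269105 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-269105 cache reload          # re-pushes cached images into the node
    out/minikube-linux-arm64 -p functional-269105 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    out/minikube-linux-arm64 cache list
    out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest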

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 kubectl -- --context functional-269105 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-269105 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (44.47s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-269105 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1101 10:52:23.120024 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-269105 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.474129461s)
functional_test.go:776: restart took 44.474225055s for "functional-269105" cluster.
I1101 10:52:30.513733 2849422 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (44.47s)
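The restart above passes a component flag through --extra-config; the same pattern applies to other apiserver, kubelet, or scheduler options. A sketch, with a verification step added for illustration (the jsonpath/grep pipeline is an assumption, not part of the test):
    out/minikube-linux-arm64 start -p functional-269105 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # confirm the flag landed on the running kube-apiserver static pod
    kubectl --context functional-269105 -n kube-system get pod -l component=kube-apiserver \
      -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins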

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-269105 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.41s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-269105 logs: (1.414324409s)
--- PASS: TestFunctional/serial/LogsCmd (1.41s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.47s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 logs --file /tmp/TestFunctionalserialLogsFileCmd1069138883/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-269105 logs --file /tmp/TestFunctionalserialLogsFileCmd1069138883/001/logs.txt: (1.464790909s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                    
TestFunctional/serial/InvalidService (4.87s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-269105 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-269105
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-269105: exit status 115 (645.605651ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31480 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-269105 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.87s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-269105 config get cpus: exit status 14 (91.056713ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-269105 config get cpus: exit status 14 (69.233593ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
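The get/set/unset cycle above, written out as plain commands (a sketch; `config get` exits 14 whenever the key is absent, as seen twice in this run):

    minikube -p functional-269105 config unset cpus   # make sure the key is absent
    minikube -p functional-269105 config get cpus     # exit status 14: key not found
    minikube -p functional-269105 config set cpus 2
    minikube -p functional-269105 config get cpus     # prints 2
    minikube -p functional-269105 config unset cpus
    minikube -p functional-269105 config get cpus     # exit status 14 again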

                                                
                                    
TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-269105 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-269105 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (192.302527ms)

                                                
                                                
-- stdout --
	* [functional-269105] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:53:09.897053 2885113 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:53:09.897233 2885113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:53:09.897266 2885113 out.go:374] Setting ErrFile to fd 2...
	I1101 10:53:09.897296 2885113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:53:09.897573 2885113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
	I1101 10:53:09.898095 2885113 out.go:368] Setting JSON to false
	I1101 10:53:09.899134 2885113 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70536,"bootTime":1761923854,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 10:53:09.899225 2885113 start.go:143] virtualization:  
	I1101 10:53:09.902495 2885113 out.go:179] * [functional-269105] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 10:53:09.906211 2885113 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:53:09.906285 2885113 notify.go:221] Checking for updates...
	I1101 10:53:09.910230 2885113 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:53:09.913107 2885113 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	I1101 10:53:09.915989 2885113 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	I1101 10:53:09.918884 2885113 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:53:09.921791 2885113 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:53:09.925135 2885113 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 10:53:09.925808 2885113 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:53:09.956648 2885113 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:53:09.956903 2885113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:53:10.023655 2885113 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:53:10.006866733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:53:10.023767 2885113 docker.go:319] overlay module found
	I1101 10:53:10.026793 2885113 out.go:179] * Using the docker driver based on existing profile
	I1101 10:53:10.029105 2885113 start.go:309] selected driver: docker
	I1101 10:53:10.029128 2885113 start.go:930] validating driver "docker" against &{Name:functional-269105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-269105 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:53:10.029269 2885113 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:53:10.032884 2885113 out.go:203] 
	W1101 10:53:10.035827 2885113 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 10:53:10.038788 2885113 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-269105 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.48s)
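The two dry-run invocations above, as plain commands (a sketch; only the undersized memory request fails validation, exiting 23 with RSRC_INSUFFICIENT_REQ_MEMORY because 250MB is below the 1800MB minimum):

    # fails validation without touching the existing cluster
    minikube start -p functional-269105 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
    # same dry run without the memory override: validation passes
    minikube start -p functional-269105 --dry-run --driver=docker --container-runtime=containerd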

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-269105 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-269105 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (187.746253ms)

                                                
                                                
-- stdout --
	* [functional-269105] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:53:09.716491 2885065 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:53:09.716613 2885065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:53:09.716618 2885065 out.go:374] Setting ErrFile to fd 2...
	I1101 10:53:09.716623 2885065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:53:09.717538 2885065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
	I1101 10:53:09.717929 2885065 out.go:368] Setting JSON to false
	I1101 10:53:09.718915 2885065 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70536,"bootTime":1761923854,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 10:53:09.718971 2885065 start.go:143] virtualization:  
	I1101 10:53:09.722709 2885065 out.go:179] * [functional-269105] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1101 10:53:09.726710 2885065 notify.go:221] Checking for updates...
	I1101 10:53:09.729852 2885065 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:53:09.732924 2885065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:53:09.735622 2885065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	I1101 10:53:09.738449 2885065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	I1101 10:53:09.741304 2885065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 10:53:09.744138 2885065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:53:09.747368 2885065 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 10:53:09.748047 2885065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:53:09.775969 2885065 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 10:53:09.776142 2885065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:53:09.830748 2885065 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:53:09.821700204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 10:53:09.830859 2885065 docker.go:319] overlay module found
	I1101 10:53:09.833948 2885065 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1101 10:53:09.836913 2885065 start.go:309] selected driver: docker
	I1101 10:53:09.836933 2885065 start.go:930] validating driver "docker" against &{Name:functional-269105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-269105 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:53:09.837045 2885065 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:53:09.840638 2885065 out.go:203] 
	W1101 10:53:09.843439 2885065 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 10:53:09.846206 2885065 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.07s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)
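The three status variants exercised above, as plain commands (a sketch; the test's own format string spells the label "kublet", which is only a display label, not the Go template field):

    minikube -p functional-269105 status           # human-readable summary
    minikube -p functional-269105 status -o json   # machine-readable
    minikube -p functional-269105 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'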

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.74s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-269105 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-269105 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-nfspr" [9be59b04-c364-4fdf-a10e-638dbe208f23] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-nfspr" [9be59b04-c364-4fdf-a10e-638dbe208f23] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003175153s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32389
functional_test.go:1680: http://192.168.49.2:32389: success! body:
Request served by hello-node-connect-7d85dfc575-nfspr

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32389
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.74s)
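The deploy/expose/lookup flow above, written out by hand (a sketch; the curl call stands in for the Go HTTP client the test actually uses):

    kubectl create deployment hello-node-connect --image kicbase/echo-server
    kubectl expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl get pods -l app=hello-node-connect                              # wait until the pod is Running
    URL=$(minikube -p functional-269105 service hello-node-connect --url)   # e.g. http://192.168.49.2:32389
    curl "$URL"                                                             # echo-server reflects the request back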

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [326014ed-d47e-4d77-a104-955c88837040] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004388239s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-269105 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-269105 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-269105 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-269105 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6db83e95-5365-4270-8c23-24e283364c57] Pending
helpers_test.go:352: "sp-pod" [6db83e95-5365-4270-8c23-24e283364c57] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [6db83e95-5365-4270-8c23-24e283364c57] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.002971237s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-269105 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-269105 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-269105 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [824ea4e5-b8e5-47af-a4c8-b1f60840d7a9] Pending
helpers_test.go:352: "sp-pod" [824ea4e5-b8e5-47af-a4c8-b1f60840d7a9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004353287s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-269105 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.00s)
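The persistence check above, as plain commands (a sketch assuming manifests equivalent to testdata/storage-provisioner/pvc.yaml and pod.yaml, where the pod mounts claim "myclaim" at /tmp/mount):

    kubectl apply -f pvc.yaml              # PersistentVolumeClaim "myclaim"
    kubectl apply -f pod.yaml              # pod "sp-pod" using the claim
    kubectl exec sp-pod -- touch /tmp/mount/foo
    kubectl delete -f pod.yaml             # remove the pod, keep the claim
    kubectl apply -f pod.yaml              # recreate the pod
    kubectl exec sp-pod -- ls /tmp/mount   # "foo" survives the pod restart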

                                                
                                    
TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh -n functional-269105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 cp functional-269105:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3360034335/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh -n functional-269105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh -n functional-269105 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.44s)
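The copy-in/copy-out round trip above, as plain commands (a sketch using a plain `minikube` binary):

    minikube -p functional-269105 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
    minikube -p functional-269105 ssh -n functional-269105 "sudo cat /home/docker/cp-test.txt"
    minikube -p functional-269105 cp functional-269105:/home/docker/cp-test.txt ./cp-test.txt   # node -> host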

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/2849422/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "sudo cat /etc/test/nested/copy/2849422/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.71s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/2849422.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "sudo cat /etc/ssl/certs/2849422.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/2849422.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "sudo cat /usr/share/ca-certificates/2849422.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/28494222.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "sudo cat /etc/ssl/certs/28494222.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/28494222.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "sudo cat /usr/share/ca-certificates/28494222.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.71s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-269105 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-269105 ssh "sudo systemctl is-active docker": exit status 1 (268.723526ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-269105 ssh "sudo systemctl is-active crio": exit status 1 (276.886186ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
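What the test asserts is that only containerd runs inside the node: the other runtimes report inactive and the ssh command exits non-zero. A sketch (the containerd line is added here for contrast and is not part of the test):

    minikube -p functional-269105 ssh "sudo systemctl is-active containerd"   # expected: active
    minikube -p functional-269105 ssh "sudo systemctl is-active docker"       # inactive, non-zero exit
    minikube -p functional-269105 ssh "sudo systemctl is-active crio"         # inactive, non-zero exit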

                                                
                                    
TestFunctional/parallel/License (0.31s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-269105 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-269105 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-269105 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 2882431: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-269105 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-269105 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-269105 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d248d76e-b1f2-4267-b701-ae3f04872d09] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d248d76e-b1f2-4267-b701-ae3f04872d09] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.00390571s
I1101 10:52:48.623362 2849422 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-269105 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.197.7 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
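Taken together, the tunnel sub-tests amount to the following flow (a sketch; the curl call is illustrative, and the harness stops the tunnel by sending SIGTERM rather than shell job control):

    minikube -p functional-269105 tunnel &    # keep a tunnel running in the background
    kubectl apply -f testdata/testsvc.yaml    # LoadBalancer service "nginx-svc"
    kubectl get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'   # e.g. 10.101.197.7
    curl http://10.101.197.7                  # reachable while the tunnel is up
    kill %1                                   # stop the tunnel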

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-269105 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-269105 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-269105 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-r6zbp" [b5b677df-d422-4b1e-ba54-054909797306] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-r6zbp" [b5b677df-d422-4b1e-ba54-054909797306] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004284845s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "378.070177ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "54.816872ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "367.67555ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "88.680112ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)
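The four profile listings above differ only in output format and in whether the cluster is probed; --light (-l) presumably skips the status check, which would explain why those runs finish in well under 100ms. A sketch:

    minikube profile list                   # table, probes cluster status
    minikube profile list -l                # table, no status probe
    minikube profile list -o json           # full JSON
    minikube profile list -o json --light   # JSON, no status probe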

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.66s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.18s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-269105 /tmp/TestFunctionalparallelMountCmdany-port649978180/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761994386160118502" to /tmp/TestFunctionalparallelMountCmdany-port649978180/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761994386160118502" to /tmp/TestFunctionalparallelMountCmdany-port649978180/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761994386160118502" to /tmp/TestFunctionalparallelMountCmdany-port649978180/001/test-1761994386160118502
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-269105 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (478.130915ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 10:53:06.640338 2849422 retry.go:31] will retry after 358.616697ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 10:53 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 10:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 10:53 test-1761994386160118502
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh cat /mount-9p/test-1761994386160118502
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-269105 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [9a7c76ef-0b28-46b9-9195-54443790ea41] Pending
helpers_test.go:352: "busybox-mount" [9a7c76ef-0b28-46b9-9195-54443790ea41] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [9a7c76ef-0b28-46b9-9195-54443790ea41] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [9a7c76ef-0b28-46b9-9195-54443790ea41] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003307257s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-269105 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-269105 /tmp/TestFunctionalparallelMountCmdany-port649978180/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.18s)
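The any-port mount flow above, as plain commands (a sketch; /tmp/hostdir is a hypothetical host directory standing in for the test's temp dir):

    minikube mount -p functional-269105 /tmp/hostdir:/mount-9p &         # 9p mount on an auto-selected port
    minikube -p functional-269105 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the mount is live
    minikube -p functional-269105 ssh -- ls -la /mount-9p                # host files are visible in the node
    minikube -p functional-269105 ssh "sudo umount -f /mount-9p"         # tear down, as the test does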

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 service list -o json
functional_test.go:1504: Took "528.085422ms" to run "out/minikube-linux-arm64 -p functional-269105 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30600
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30600
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
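The service lookups exercised across the ServiceCmd sub-tests above, collected as plain commands (a sketch; all of them resolve the same NodePort shown in the logs):

    minikube -p functional-269105 service list                                   # table of services
    minikube -p functional-269105 service list -o json
    minikube -p functional-269105 service hello-node --url                       # e.g. http://192.168.49.2:30600
    minikube -p functional-269105 service --namespace=default --https --url hello-node
    minikube -p functional-269105 service hello-node --url --format={{.IP}}      # just the node IP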

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.15s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-269105 /tmp/TestFunctionalparallelMountCmdspecific-port142291894/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-269105 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (371.526272ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 10:53:14.712138 2849422 retry.go:31] will retry after 720.71901ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-269105 /tmp/TestFunctionalparallelMountCmdspecific-port142291894/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-269105 ssh "sudo umount -f /mount-9p": exit status 1 (273.658299ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-269105 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-269105 /tmp/TestFunctionalparallelMountCmdspecific-port142291894/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.1s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-269105 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1285873411/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-269105 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1285873411/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-269105 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1285873411/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-269105 ssh "findmnt -T" /mount1: exit status 1 (560.748463ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 10:53:17.054287 2849422 retry.go:31] will retry after 662.446254ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-269105 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-269105 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1285873411/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-269105 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1285873411/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-269105 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1285873411/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.10s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-269105 version -o=json --components: (1.172188791s)
--- PASS: TestFunctional/parallel/Version/components (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-269105 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-269105
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-269105
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-269105 image ls --format short --alsologtostderr:
I1101 10:53:28.707103 2888056 out.go:360] Setting OutFile to fd 1 ...
I1101 10:53:28.707282 2888056 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:28.707313 2888056 out.go:374] Setting ErrFile to fd 2...
I1101 10:53:28.707336 2888056 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:28.707639 2888056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
I1101 10:53:28.708327 2888056 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:28.708487 2888056 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:28.709007 2888056 cli_runner.go:164] Run: docker container inspect functional-269105 --format={{.State.Status}}
I1101 10:53:28.727038 2888056 ssh_runner.go:195] Run: systemctl --version
I1101 10:53:28.727090 2888056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-269105
I1101 10:53:28.743924 2888056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36806 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/functional-269105/id_rsa Username:docker}
I1101 10:53:28.846437 2888056 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-269105 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ docker.io/library/minikube-local-cache-test │ functional-269105  │ sha256:3c5bc7 │ 988B   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ localhost/my-image                          │ functional-269105  │ sha256:e2da82 │ 831kB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/kicbase/echo-server               │ functional-269105  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ docker.io/library/nginx                     │ latest             │ sha256:46fabd │ 58.3MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-269105 image ls --format table --alsologtostderr:
I1101 10:53:33.134310 2888468 out.go:360] Setting OutFile to fd 1 ...
I1101 10:53:33.134514 2888468 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:33.134547 2888468 out.go:374] Setting ErrFile to fd 2...
I1101 10:53:33.134568 2888468 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:33.134841 2888468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
I1101 10:53:33.135505 2888468 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:33.135670 2888468 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:33.136213 2888468 cli_runner.go:164] Run: docker container inspect functional-269105 --format={{.State.Status}}
I1101 10:53:33.155275 2888468 ssh_runner.go:195] Run: systemctl --version
I1101 10:53:33.155344 2888468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-269105
I1101 10:53:33.172560 2888468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36806 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/functional-269105/id_rsa Username:docker}
I1101 10:53:33.278533 2888468 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-269105 image ls --format json --alsologtostderr:
[{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-269105"],"size":"2173567"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:e2da82407ddb2329ca6ac127089697c9b22d09fa84f96f5e85b71c8b294dfcdd","repoDigests":[],"repoTags":["localhost/my-image:functional-269105"],"size":"830616"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9
d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797","repoDigests":["docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f"],"repoTags":["docker.io/library/nginx:latest"],"size":"58267312"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14
dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:3c5bc7f4325c68637140b9ec680f6695b9eccdaab078ba0f47b5115daf671522","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-269105"],"size":"988"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1ddda
d8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-sch
eduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-269105 image ls --format json --alsologtostderr:
I1101 10:53:32.908151 2888431 out.go:360] Setting OutFile to fd 1 ...
I1101 10:53:32.908337 2888431 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:32.908368 2888431 out.go:374] Setting ErrFile to fd 2...
I1101 10:53:32.908387 2888431 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:32.908645 2888431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
I1101 10:53:32.909300 2888431 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:32.909481 2888431 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:32.909971 2888431 cli_runner.go:164] Run: docker container inspect functional-269105 --format={{.State.Status}}
I1101 10:53:32.927170 2888431 ssh_runner.go:195] Run: systemctl --version
I1101 10:53:32.927226 2888431 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-269105
I1101 10:53:32.943614 2888431 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36806 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/functional-269105/id_rsa Username:docker}
I1101 10:53:33.047504 2888431 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
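Of the four listing formats exercised here, the JSON one is the easiest to post-process. A small sketch, assuming jq is available on the host (the test itself does not use jq):

# Print "tag<TAB>size" for every image reported by the cluster runtime.
out/minikube-linux-arm64 -p functional-269105 image ls --format json --alsologtostderr \
  | jq -r '.[] | "\(.repoTags[0])\t\(.size)"' \
  | sort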

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-269105 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-269105
size: "2173567"
- id: sha256:3c5bc7f4325c68637140b9ec680f6695b9eccdaab078ba0f47b5115daf671522
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-269105
size: "988"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797
repoDigests:
- docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f
repoTags:
- docker.io/library/nginx:latest
size: "58267312"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:e2da82407ddb2329ca6ac127089697c9b22d09fa84f96f5e85b71c8b294dfcdd
repoDigests: []
repoTags:
- localhost/my-image:functional-269105
size: "830616"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-269105 image ls --format yaml --alsologtostderr:
I1101 10:53:32.672228 2888393 out.go:360] Setting OutFile to fd 1 ...
I1101 10:53:32.672369 2888393 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:32.672375 2888393 out.go:374] Setting ErrFile to fd 2...
I1101 10:53:32.672379 2888393 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:32.672610 2888393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
I1101 10:53:32.673248 2888393 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:32.673372 2888393 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:32.673832 2888393 cli_runner.go:164] Run: docker container inspect functional-269105 --format={{.State.Status}}
I1101 10:53:32.692508 2888393 ssh_runner.go:195] Run: systemctl --version
I1101 10:53:32.692559 2888393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-269105
I1101 10:53:32.712492 2888393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36806 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/functional-269105/id_rsa Username:docker}
I1101 10:53:32.818203 2888393 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-269105 ssh pgrep buildkitd: exit status 1 (279.435701ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image build -t localhost/my-image:functional-269105 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-269105 image build -t localhost/my-image:functional-269105 testdata/build --alsologtostderr: (3.081104233s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-269105 image build -t localhost/my-image:functional-269105 testdata/build --alsologtostderr:
I1101 10:53:29.373711 2888199 out.go:360] Setting OutFile to fd 1 ...
I1101 10:53:29.375086 2888199 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:29.375104 2888199 out.go:374] Setting ErrFile to fd 2...
I1101 10:53:29.375110 2888199 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:29.375430 2888199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
I1101 10:53:29.376242 2888199 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:29.379482 2888199 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:29.380031 2888199 cli_runner.go:164] Run: docker container inspect functional-269105 --format={{.State.Status}}
I1101 10:53:29.397988 2888199 ssh_runner.go:195] Run: systemctl --version
I1101 10:53:29.398043 2888199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-269105
I1101 10:53:29.415947 2888199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36806 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/functional-269105/id_rsa Username:docker}
I1101 10:53:29.522378 2888199 build_images.go:162] Building image from path: /tmp/build.3473380510.tar
I1101 10:53:29.522468 2888199 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 10:53:29.530526 2888199 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3473380510.tar
I1101 10:53:29.534008 2888199 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3473380510.tar: stat -c "%s %y" /var/lib/minikube/build/build.3473380510.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3473380510.tar': No such file or directory
I1101 10:53:29.534081 2888199 ssh_runner.go:362] scp /tmp/build.3473380510.tar --> /var/lib/minikube/build/build.3473380510.tar (3072 bytes)
I1101 10:53:29.553527 2888199 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3473380510
I1101 10:53:29.561616 2888199 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3473380510 -xf /var/lib/minikube/build/build.3473380510.tar
I1101 10:53:29.569331 2888199 containerd.go:394] Building image: /var/lib/minikube/build/build.3473380510
I1101 10:53:29.569402 2888199 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3473380510 --local dockerfile=/var/lib/minikube/build/build.3473380510 --output type=image,name=localhost/my-image:functional-269105
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:44bf39153c60f18251fc585ba3307806bd277fb21eebdbc1a2899ae90c54fe06
#8 exporting manifest sha256:44bf39153c60f18251fc585ba3307806bd277fb21eebdbc1a2899ae90c54fe06 0.0s done
#8 exporting config sha256:e2da82407ddb2329ca6ac127089697c9b22d09fa84f96f5e85b71c8b294dfcdd 0.0s done
#8 naming to localhost/my-image:functional-269105 done
#8 DONE 0.2s
I1101 10:53:32.360473 2888199 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3473380510 --local dockerfile=/var/lib/minikube/build/build.3473380510 --output type=image,name=localhost/my-image:functional-269105: (2.791043247s)
I1101 10:53:32.360549 2888199 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3473380510
I1101 10:53:32.372982 2888199 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3473380510.tar
I1101 10:53:32.382830 2888199 build_images.go:218] Built localhost/my-image:functional-269105 from /tmp/build.3473380510.tar
I1101 10:53:32.382871 2888199 build_images.go:134] succeeded building to: functional-269105
I1101 10:53:32.382876 2888199 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.59s)
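The buildkit steps above (#1 through #8) imply a three-instruction Dockerfile: a FROM on gcr.io/k8s-minikube/busybox, a RUN true, and an ADD of content.txt. The sketch below reconstructs that from the log; it is an approximation, not the literal contents of testdata/build, and the temp directory and content.txt payload are made up:

# Recreate an equivalent build context (reconstruction of testdata/build inferred from the build steps above).
mkdir -p /tmp/imagebuild-demo && cd /tmp/imagebuild-demo
echo "demo" > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF

# Build inside the cluster (on containerd this drives buildctl, as the log shows), then confirm the tag is listed.
out/minikube-linux-arm64 -p functional-269105 image build -t localhost/my-image:functional-269105 . --alsologtostderr
out/minikube-linux-arm64 -p functional-269105 image ls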

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-269105
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image load --daemon kicbase/echo-server:functional-269105 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image load --daemon kicbase/echo-server:functional-269105 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-269105
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image load --daemon kicbase/echo-server:functional-269105 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)
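The three daemon-load variants above share one pattern: put a tag into the host's Docker daemon, push that tag into the cluster's container runtime with image load --daemon, and confirm it with image ls. A minimal sketch using the same echo-server image:

# Tag the image in the host Docker daemon, then copy it into the cluster runtime.
docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-269105
out/minikube-linux-arm64 -p functional-269105 image load --daemon kicbase/echo-server:functional-269105 --alsologtostderr
out/minikube-linux-arm64 -p functional-269105 image ls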

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image save kicbase/echo-server:functional-269105 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image rm kicbase/echo-server:functional-269105 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-269105
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 image save --daemon kicbase/echo-server:functional-269105 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-269105
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
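ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together form a round trip through a tarball and back into the host daemon. Collected into one sequence, with the paths used in this run:

# Export the image from the cluster to a tar, drop it from the cluster, and re-import it from the tar.
out/minikube-linux-arm64 -p functional-269105 image save kicbase/echo-server:functional-269105 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
out/minikube-linux-arm64 -p functional-269105 image rm kicbase/echo-server:functional-269105 --alsologtostderr
out/minikube-linux-arm64 -p functional-269105 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr

# Push the cluster's copy back into the host Docker daemon and verify it arrived.
docker rmi kicbase/echo-server:functional-269105
out/minikube-linux-arm64 -p functional-269105 image save --daemon kicbase/echo-server:functional-269105 --alsologtostderr
docker image inspect kicbase/echo-server:functional-269105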

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-269105 update-context --alsologtostderr -v=2
E1101 10:53:45.042130 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:01.180836 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:56:28.888743 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-269105
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-269105
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-269105
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (176.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1101 11:01:01.181201 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m55.452836677s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (176.31s)
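The HA bring-up is a single invocation; the --ha flag is what requests multiple control-plane nodes. A condensed reproduction of the commands used in this subtest, with a kubectl node listing added for a quick sanity check (assumed, not part of the test):

# Bring up the HA cluster and confirm every node reports Running/Configured.
out/minikube-linux-arm64 -p ha-533626 start --ha --memory 3072 --wait true --driver=docker --container-runtime=containerd
out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5
kubectl --context ha-533626 get nodes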

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 kubectl -- rollout status deployment/busybox: (3.74977467s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-2q8mw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-xfskz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-zwccw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-2q8mw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-xfskz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-zwccw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-2q8mw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-xfskz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-zwccw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.65s)
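DeployApp applies testdata/ha/ha-pod-dns-test.yaml, waits for the busybox rollout, then checks in-cluster DNS from every pod. A sketch of the per-pod check, assuming the default namespace only holds the test deployment's pods (true for this run):

# Resolve the in-cluster service name from each busybox pod of the deployment.
for pod in $(out/minikube-linux-arm64 -p ha-533626 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'); do
  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done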

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-2q8mw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-2q8mw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-xfskz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-xfskz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-zwccw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-zwccw -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.60s)
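Host reachability is checked entirely from inside the pods: nslookup on host.minikube.internal, a small awk/cut pipeline to pull out the address (192.168.49.1 in this run), then a single ping. The two commands as the test runs them, against one of the pods:

# Extract the host address seen from inside the pod, then ping it once.
out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-2q8mw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-arm64 -p ha-533626 kubectl -- exec busybox-7b57f96db7-2q8mw -- sh -c "ping -c 1 192.168.49.1"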

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (29.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 node add --alsologtostderr -v 5: (27.96575741s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5: (1.044825881s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (29.01s)
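node add joins an extra machine to the running cluster; the later status output in this run shows it came up as a worker (ha-533626-m04, type: Worker). The two commands used:

# Join an additional node to the running cluster and re-check overall status.
out/minikube-linux-arm64 -p ha-533626 node add --alsologtostderr -v 5
out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5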

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-533626 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)
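The labels assertion uses a kubectl jsonpath range to dump every node's label map on one line. The same query, plus --show-labels as a more readable alternative (a standard kubectl flag, not used by the test):

# Dump all node labels with the jsonpath expression the test uses ...
kubectl --context ha-533626 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
# ... or view them per node in table form.
kubectl --context ha-533626 get nodes --show-labels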

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.105184302s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 status --output json --alsologtostderr -v 5: (1.047077919s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp testdata/cp-test.txt ha-533626:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2092420517/001/cp-test_ha-533626.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626:/home/docker/cp-test.txt ha-533626-m02:/home/docker/cp-test_ha-533626_ha-533626-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m02 "sudo cat /home/docker/cp-test_ha-533626_ha-533626-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626:/home/docker/cp-test.txt ha-533626-m03:/home/docker/cp-test_ha-533626_ha-533626-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m03 "sudo cat /home/docker/cp-test_ha-533626_ha-533626-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626:/home/docker/cp-test.txt ha-533626-m04:/home/docker/cp-test_ha-533626_ha-533626-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m04 "sudo cat /home/docker/cp-test_ha-533626_ha-533626-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp testdata/cp-test.txt ha-533626-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2092420517/001/cp-test_ha-533626-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626-m02:/home/docker/cp-test.txt ha-533626:/home/docker/cp-test_ha-533626-m02_ha-533626.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626 "sudo cat /home/docker/cp-test_ha-533626-m02_ha-533626.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626-m02:/home/docker/cp-test.txt ha-533626-m03:/home/docker/cp-test_ha-533626-m02_ha-533626-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m03 "sudo cat /home/docker/cp-test_ha-533626-m02_ha-533626-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626-m02:/home/docker/cp-test.txt ha-533626-m04:/home/docker/cp-test_ha-533626-m02_ha-533626-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m04 "sudo cat /home/docker/cp-test_ha-533626-m02_ha-533626-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp testdata/cp-test.txt ha-533626-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2092420517/001/cp-test_ha-533626-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626-m03:/home/docker/cp-test.txt ha-533626:/home/docker/cp-test_ha-533626-m03_ha-533626.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626 "sudo cat /home/docker/cp-test_ha-533626-m03_ha-533626.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626-m03:/home/docker/cp-test.txt ha-533626-m02:/home/docker/cp-test_ha-533626-m03_ha-533626-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m02 "sudo cat /home/docker/cp-test_ha-533626-m03_ha-533626-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626-m03:/home/docker/cp-test.txt ha-533626-m04:/home/docker/cp-test_ha-533626-m03_ha-533626-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m04 "sudo cat /home/docker/cp-test_ha-533626-m03_ha-533626-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp testdata/cp-test.txt ha-533626-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2092420517/001/cp-test_ha-533626-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626-m04:/home/docker/cp-test.txt ha-533626:/home/docker/cp-test_ha-533626-m04_ha-533626.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626 "sudo cat /home/docker/cp-test_ha-533626-m04_ha-533626.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626-m04:/home/docker/cp-test.txt ha-533626-m02:/home/docker/cp-test_ha-533626-m04_ha-533626-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m02 "sudo cat /home/docker/cp-test_ha-533626-m04_ha-533626-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 cp ha-533626-m04:/home/docker/cp-test.txt ha-533626-m03:/home/docker/cp-test_ha-533626-m04_ha-533626-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m03 "sudo cat /home/docker/cp-test_ha-533626-m04_ha-533626-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.03s)
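The CopyFile matrix above is long, but every step is one of two shapes: copy a file with minikube cp, then read it back over ssh -n to verify. One host-to-node and one node-to-node example, using the node names from this run:

# Host -> node, then verify over ssh.
out/minikube-linux-arm64 -p ha-533626 cp testdata/cp-test.txt ha-533626-m02:/home/docker/cp-test.txt
out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m02 "sudo cat /home/docker/cp-test.txt"

# Node -> node goes through the same subcommand.
out/minikube-linux-arm64 -p ha-533626 cp ha-533626-m02:/home/docker/cp-test.txt ha-533626-m03:/home/docker/cp-test_ha-533626-m02_ha-533626-m03.txt
out/minikube-linux-arm64 -p ha-533626 ssh -n ha-533626-m03 "sudo cat /home/docker/cp-test_ha-533626-m02_ha-533626-m03.txt"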

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 node stop m02 --alsologtostderr -v 5: (12.114500226s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5: exit status 7 (795.557822ms)

                                                
                                                
-- stdout --
	ha-533626
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533626-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-533626-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-533626-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:02:22.578740 2905476 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:02:22.578889 2905476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:02:22.578901 2905476 out.go:374] Setting ErrFile to fd 2...
	I1101 11:02:22.578906 2905476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:02:22.581622 2905476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
	I1101 11:02:22.582075 2905476 out.go:368] Setting JSON to false
	I1101 11:02:22.582119 2905476 mustload.go:66] Loading cluster: ha-533626
	I1101 11:02:22.582180 2905476 notify.go:221] Checking for updates...
	I1101 11:02:22.583432 2905476 config.go:182] Loaded profile config "ha-533626": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 11:02:22.583457 2905476 status.go:174] checking status of ha-533626 ...
	I1101 11:02:22.584260 2905476 cli_runner.go:164] Run: docker container inspect ha-533626 --format={{.State.Status}}
	I1101 11:02:22.606195 2905476 status.go:371] ha-533626 host status = "Running" (err=<nil>)
	I1101 11:02:22.606225 2905476 host.go:66] Checking if "ha-533626" exists ...
	I1101 11:02:22.606573 2905476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-533626
	I1101 11:02:22.631629 2905476 host.go:66] Checking if "ha-533626" exists ...
	I1101 11:02:22.631947 2905476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:02:22.632000 2905476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-533626
	I1101 11:02:22.654987 2905476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36811 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/ha-533626/id_rsa Username:docker}
	I1101 11:02:22.761932 2905476 ssh_runner.go:195] Run: systemctl --version
	I1101 11:02:22.768342 2905476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:02:22.781120 2905476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:02:22.838393 2905476 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-01 11:02:22.828242729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:02:22.838934 2905476 kubeconfig.go:125] found "ha-533626" server: "https://192.168.49.254:8443"
	I1101 11:02:22.838979 2905476 api_server.go:166] Checking apiserver status ...
	I1101 11:02:22.839031 2905476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:02:22.851662 2905476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	I1101 11:02:22.860167 2905476 api_server.go:182] apiserver freezer: "9:freezer:/docker/1347bbfef21806149bb56d33b51fe97165db9e325f7d160f41a8c2db693eef63/kubepods/burstable/podfe023768100972a4be18322c7509025b/84a8190873aba38ca1e810121bb2b439eb829967e779e95f89c2f0352df93f5b"
	I1101 11:02:22.860252 2905476 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1347bbfef21806149bb56d33b51fe97165db9e325f7d160f41a8c2db693eef63/kubepods/burstable/podfe023768100972a4be18322c7509025b/84a8190873aba38ca1e810121bb2b439eb829967e779e95f89c2f0352df93f5b/freezer.state
	I1101 11:02:22.867745 2905476 api_server.go:204] freezer state: "THAWED"
	I1101 11:02:22.867781 2905476 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:02:22.876130 2905476 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:02:22.876158 2905476 status.go:463] ha-533626 apiserver status = Running (err=<nil>)
	I1101 11:02:22.876169 2905476 status.go:176] ha-533626 status: &{Name:ha-533626 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:02:22.876186 2905476 status.go:174] checking status of ha-533626-m02 ...
	I1101 11:02:22.876508 2905476 cli_runner.go:164] Run: docker container inspect ha-533626-m02 --format={{.State.Status}}
	I1101 11:02:22.898085 2905476 status.go:371] ha-533626-m02 host status = "Stopped" (err=<nil>)
	I1101 11:02:22.898110 2905476 status.go:384] host is not running, skipping remaining checks
	I1101 11:02:22.898118 2905476 status.go:176] ha-533626-m02 status: &{Name:ha-533626-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:02:22.898138 2905476 status.go:174] checking status of ha-533626-m03 ...
	I1101 11:02:22.898459 2905476 cli_runner.go:164] Run: docker container inspect ha-533626-m03 --format={{.State.Status}}
	I1101 11:02:22.923572 2905476 status.go:371] ha-533626-m03 host status = "Running" (err=<nil>)
	I1101 11:02:22.923595 2905476 host.go:66] Checking if "ha-533626-m03" exists ...
	I1101 11:02:22.924012 2905476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-533626-m03
	I1101 11:02:22.941304 2905476 host.go:66] Checking if "ha-533626-m03" exists ...
	I1101 11:02:22.941619 2905476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:02:22.941668 2905476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-533626-m03
	I1101 11:02:22.958643 2905476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36821 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/ha-533626-m03/id_rsa Username:docker}
	I1101 11:02:23.069139 2905476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:02:23.083128 2905476 kubeconfig.go:125] found "ha-533626" server: "https://192.168.49.254:8443"
	I1101 11:02:23.083170 2905476 api_server.go:166] Checking apiserver status ...
	I1101 11:02:23.083250 2905476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:02:23.097185 2905476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1338/cgroup
	I1101 11:02:23.105701 2905476 api_server.go:182] apiserver freezer: "9:freezer:/docker/75ac9e8280f05d7623fcc69163f1361943dbe585509ea152ba3e6fc7394b73ba/kubepods/burstable/pod22d2b23b17e800713e321b4f0e8339a9/abb1e164ff858bd17303af1b5f2f08b76c3d27a8e5536223cc99916b272da744"
	I1101 11:02:23.105786 2905476 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/75ac9e8280f05d7623fcc69163f1361943dbe585509ea152ba3e6fc7394b73ba/kubepods/burstable/pod22d2b23b17e800713e321b4f0e8339a9/abb1e164ff858bd17303af1b5f2f08b76c3d27a8e5536223cc99916b272da744/freezer.state
	I1101 11:02:23.116332 2905476 api_server.go:204] freezer state: "THAWED"
	I1101 11:02:23.116428 2905476 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 11:02:23.124928 2905476 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 11:02:23.124959 2905476 status.go:463] ha-533626-m03 apiserver status = Running (err=<nil>)
	I1101 11:02:23.124968 2905476 status.go:176] ha-533626-m03 status: &{Name:ha-533626-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:02:23.124985 2905476 status.go:174] checking status of ha-533626-m04 ...
	I1101 11:02:23.125318 2905476 cli_runner.go:164] Run: docker container inspect ha-533626-m04 --format={{.State.Status}}
	I1101 11:02:23.144615 2905476 status.go:371] ha-533626-m04 host status = "Running" (err=<nil>)
	I1101 11:02:23.144651 2905476 host.go:66] Checking if "ha-533626-m04" exists ...
	I1101 11:02:23.144940 2905476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-533626-m04
	I1101 11:02:23.164105 2905476 host.go:66] Checking if "ha-533626-m04" exists ...
	I1101 11:02:23.164423 2905476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:02:23.164462 2905476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-533626-m04
	I1101 11:02:23.181932 2905476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36826 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/ha-533626-m04/id_rsa Username:docker}
	I1101 11:02:23.285250 2905476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:02:23.301411 2905476 status.go:176] ha-533626-m04 status: &{Name:ha-533626-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.91s)
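Note: the status check above finishes by probing the apiserver health endpoint at the cluster VIP. A minimal sketch of that probe, using the address 192.168.49.254:8443 from this run; InsecureSkipVerify is a shortcut for the sketch only, the real check trusts the cluster CA instead.

// healthzprobe.go - hit the apiserver /healthz endpoint the way the status log above does.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // a healthy control plane prints "200: ok"
}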

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 node start m02 --alsologtostderr -v 5: (12.95705565s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5: (1.374054928s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.46s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1101 11:02:40.205909 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:02:40.214681 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:02:40.226178 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:02:40.247625 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:02:40.289101 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:02:40.370519 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.858841791s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 stop --alsologtostderr -v 5
E1101 11:02:40.531713 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:02:40.853546 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:02:41.495653 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:02:42.777673 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:02:45.339143 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:02:50.461383 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:03:00.703656 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 stop --alsologtostderr -v 5: (37.76916063s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 start --wait true --alsologtostderr -v 5
E1101 11:03:21.185720 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:04:02.147416 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 start --wait true --alsologtostderr -v 5: (1m0.364676383s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.31s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 node delete m03 --alsologtostderr -v 5: (9.685137741s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.68s)
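Note: the final kubectl query above emits one Ready condition per remaining node. A short sketch that runs an equivalent go-template query (adapted from the command in the log) and flags any node that is not Ready; it assumes kubectl on PATH and pointed at this cluster.

// readycheck.go - require every node's Ready condition to be "True".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("found a node that is not Ready:", status)
			return
		}
	}
	fmt.Println("all nodes Ready")
}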

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 stop --alsologtostderr -v 5: (36.33949111s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5: exit status 7 (113.646804ms)

                                                
                                                
-- stdout --
	ha-533626
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-533626-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-533626-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:05:06.612392 2920439 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:05:06.612526 2920439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:05:06.612537 2920439 out.go:374] Setting ErrFile to fd 2...
	I1101 11:05:06.612542 2920439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:05:06.612793 2920439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
	I1101 11:05:06.613965 2920439 out.go:368] Setting JSON to false
	I1101 11:05:06.614016 2920439 mustload.go:66] Loading cluster: ha-533626
	I1101 11:05:06.614076 2920439 notify.go:221] Checking for updates...
	I1101 11:05:06.615053 2920439 config.go:182] Loaded profile config "ha-533626": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 11:05:06.615078 2920439 status.go:174] checking status of ha-533626 ...
	I1101 11:05:06.615574 2920439 cli_runner.go:164] Run: docker container inspect ha-533626 --format={{.State.Status}}
	I1101 11:05:06.633167 2920439 status.go:371] ha-533626 host status = "Stopped" (err=<nil>)
	I1101 11:05:06.633192 2920439 status.go:384] host is not running, skipping remaining checks
	I1101 11:05:06.633198 2920439 status.go:176] ha-533626 status: &{Name:ha-533626 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:05:06.633237 2920439 status.go:174] checking status of ha-533626-m02 ...
	I1101 11:05:06.633537 2920439 cli_runner.go:164] Run: docker container inspect ha-533626-m02 --format={{.State.Status}}
	I1101 11:05:06.652939 2920439 status.go:371] ha-533626-m02 host status = "Stopped" (err=<nil>)
	I1101 11:05:06.652962 2920439 status.go:384] host is not running, skipping remaining checks
	I1101 11:05:06.652968 2920439 status.go:176] ha-533626-m02 status: &{Name:ha-533626-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:05:06.652987 2920439 status.go:174] checking status of ha-533626-m04 ...
	I1101 11:05:06.653271 2920439 cli_runner.go:164] Run: docker container inspect ha-533626-m04 --format={{.State.Status}}
	I1101 11:05:06.678418 2920439 status.go:371] ha-533626-m04 host status = "Stopped" (err=<nil>)
	I1101 11:05:06.678442 2920439 status.go:384] host is not running, skipping remaining checks
	I1101 11:05:06.678449 2920439 status.go:176] ha-533626-m04 status: &{Name:ha-533626-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.45s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (60.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1101 11:05:24.069059 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:06:01.180741 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.568634533s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.53s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 node add --control-plane --alsologtostderr -v 5
E1101 11:07:24.250878 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 node add --control-plane --alsologtostderr -v 5: (1m18.725656873s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-533626 status --alsologtostderr -v 5: (1.063682204s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.79s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.093593583s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

                                                
                                    
TestJSONOutput/start/Command (48.27s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-616842 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1101 11:07:40.206444 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:08:07.910796 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-616842 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (48.262489052s)
--- PASS: TestJSONOutput/start/Command (48.27s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-616842 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-616842 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.95s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-616842 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-616842 --output=json --user=testUser: (5.954608879s)
--- PASS: TestJSONOutput/stop/Command (5.95s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-538991 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-538991 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (90.862912ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d4de9ee3-51b8-4eed-9a28-1c3fbc5ab80c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-538991] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"325d34b6-9b06-42e6-b0e3-b2bd17b64783","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21830"}}
	{"specversion":"1.0","id":"5cbc44ca-30bd-4cd8-8c62-eea1d4f5f703","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d6d6e1aa-9b1d-4f62-904c-45e55467f5b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig"}}
	{"specversion":"1.0","id":"6d93204a-7dfe-4403-8fd0-9c7597464467","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube"}}
	{"specversion":"1.0","id":"4c4fecf8-bf55-4295-9321-13c8cc0bd35f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a7696352-84c3-48c8-b3b3-da98b4005585","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5dc6ef6b-fdd6-45b8-aeb2-b24ba833ff30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-538991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-538991
--- PASS: TestErrorJSONOutput (0.23s)
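Note: with --output=json every stdout line above is a CloudEvents-style record, and the last one is the error event carrying exit code 56. A minimal sketch that decodes such a stream and surfaces the error event; only the fields visible above are modelled, and the program expects the JSON lines on stdin.

// events.go - scan line-delimited minikube JSON output for an error event.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models just the fields used here from the records shown above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. `minikube start --output=json ... | go run events.go`
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip anything that is not a JSON record
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("start failed (exit code %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}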

                                                
                                    
TestKicCustomNetwork/create_custom_network (39.6s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-868996 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-868996 --network=: (37.335295893s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-868996" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-868996
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-868996: (2.241029164s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.60s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.15s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-182424 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-182424 --network=bridge: (34.049537971s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-182424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-182424
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-182424: (2.084306068s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.15s)

                                                
                                    
TestKicExistingNetwork (36.27s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1101 11:09:53.651158 2849422 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1101 11:09:53.668662 2849422 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1101 11:09:53.668738 2849422 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1101 11:09:53.668756 2849422 cli_runner.go:164] Run: docker network inspect existing-network
W1101 11:09:53.685212 2849422 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1101 11:09:53.685251 2849422 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1101 11:09:53.685266 2849422 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1101 11:09:53.685363 2849422 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 11:09:53.702385 2849422 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1006bc31d72c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:c0:fc:76:40:11} reservation:<nil>}
I1101 11:09:53.702690 2849422 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c1ac90}
I1101 11:09:53.702718 2849422 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1101 11:09:53.702770 2849422 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1101 11:09:53.760888 2849422 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-912581 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-912581 --network=existing-network: (34.011085301s)
helpers_test.go:175: Cleaning up "existing-network-912581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-912581
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-912581: (2.119330139s)
I1101 11:10:29.907779 2849422 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.27s)
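Note: before starting the profile, the test pre-creates the network with the docker command logged above. A small sketch that issues the same command via os/exec, with the subnet, gateway, MTU and labels copied from the log; it assumes a local docker CLI and that 192.168.58.0/24 is still free.

// precreate_network.go - create the labelled bridge network the profile then attaches to.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	args := []string{
		"network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network",
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("docker %v: %v\n%s", args, err, out)
	}
	fmt.Printf("created network: %s", out) // docker prints the new network ID
}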

                                                
                                    
TestKicCustomSubnet (39.09s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-070469 --subnet=192.168.60.0/24
E1101 11:11:01.180630 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-070469 --subnet=192.168.60.0/24: (36.803157672s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-070469 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-070469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-070469
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-070469: (2.266983973s)
--- PASS: TestKicCustomSubnet (39.09s)
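Note: the assertion above reads the network's subnet back with docker network inspect and a Go template. A short sketch of the same check, assuming the network name matches the profile (custom-subnet-070469) and the subnet requested in this run.

// subnetcheck.go - confirm the profile's network received the requested subnet.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-070469",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	fmt.Println("subnet:", got, "- matches request:", got == "192.168.60.0/24")
}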

                                                
                                    
TestKicStaticIP (37.09s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-786860 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-786860 --static-ip=192.168.200.200: (34.642136436s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-786860 ip
helpers_test.go:175: Cleaning up "static-ip-786860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-786860
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-786860: (2.283021486s)
--- PASS: TestKicStaticIP (37.09s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (72.62s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-441419 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-441419 --driver=docker  --container-runtime=containerd: (33.240376833s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-443971 --driver=docker  --container-runtime=containerd
E1101 11:12:40.211990 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-443971 --driver=docker  --container-runtime=containerd: (33.571731615s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-441419
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-443971
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-443971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-443971
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-443971: (2.069737382s)
helpers_test.go:175: Cleaning up "first-441419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-441419
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-441419: (2.326955541s)
--- PASS: TestMinikubeProfile (72.62s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.36s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-551551 --memory=3072 --mount-string /tmp/TestMountStartserial3239432309/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-551551 --memory=3072 --mount-string /tmp/TestMountStartserial3239432309/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.360708572s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.36s)
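Note: the mount tests start a no-Kubernetes profile with a host directory mounted into the guest and then verify it over SSH. A rough sketch of that flow, assuming the binary path and profile name from this run; the host directory /tmp/host-dir is a placeholder for the temp dir the test creates.

// mountcheck.go - start a profile with a host mount and list the mount target over SSH.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-arm64"
	start := exec.Command(mk, "start", "-p", "mount-start-1-551551", "--memory=3072",
		"--mount-string", "/tmp/host-dir:/minikube-host", "--mount-port", "46464",
		"--no-kubernetes", "--driver=docker", "--container-runtime=containerd")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
	// The mount is usable once the guest can list the target directory.
	ls, err := exec.Command(mk, "-p", "mount-start-1-551551", "ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh ls failed: %v\n%s", err, ls)
	}
	fmt.Printf("mounted contents:\n%s", ls)
}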

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-551551 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.46s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-553767 --memory=3072 --mount-string /tmp/TestMountStartserial3239432309/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-553767 --memory=3072 --mount-string /tmp/TestMountStartserial3239432309/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.460097716s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.46s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-553767 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-551551 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-551551 --alsologtostderr -v=5: (1.686818173s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-553767 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-553767
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-553767: (1.282836343s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.88s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-553767
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-553767: (6.883424721s)
--- PASS: TestMountStart/serial/RestartStopped (7.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-553767 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (103.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-846529 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-846529 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m42.59984379s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.15s)
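
The fresh start brings up the control plane and one worker in a single invocation, and the follow-up status call is what the test asserts on. A reproduction sketch using the flags exactly as logged (out/minikube-linux-arm64 is the CI build of the binary; substitute your own minikube when reproducing locally):

    # bring up a 2-node cluster on the docker driver with containerd, waiting for all components
    out/minikube-linux-arm64 start -p multinode-846529 --wait=true --memory=3072 --nodes=2 \
        -v=5 --alsologtostderr --driver=docker --container-runtime=containerd
    # both nodes should report Running
    out/minikube-linux-arm64 -p multinode-846529 status --alsologtostderr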

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-846529 -- rollout status deployment/busybox: (3.010749807s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- exec busybox-7b57f96db7-95ngn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- exec busybox-7b57f96db7-dslwt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- exec busybox-7b57f96db7-95ngn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- exec busybox-7b57f96db7-dslwt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- exec busybox-7b57f96db7-95ngn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- exec busybox-7b57f96db7-dslwt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.99s)
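
The deploy step applies a two-pod busybox Deployment and resolves an external name, the kubernetes.default short name, and the fully qualified cluster-local name from each pod. A hedged sketch of the same flow; the manifest path comes from the test tree and the pod name is the one from this run (a fresh rollout will generate different pod names):

    # apply the test manifest and wait for the rollout to complete
    out/minikube-linux-arm64 kubectl -p multinode-846529 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-arm64 kubectl -p multinode-846529 -- rollout status deployment/busybox
    # resolve an external and an in-cluster name from one of the pods (pod name taken from this run)
    out/minikube-linux-arm64 kubectl -p multinode-846529 -- exec busybox-7b57f96db7-95ngn -- nslookup kubernetes.io
    out/minikube-linux-arm64 kubectl -p multinode-846529 -- exec busybox-7b57f96db7-95ngn -- nslookup kubernetes.default.svc.cluster.local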

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- exec busybox-7b57f96db7-95ngn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- exec busybox-7b57f96db7-95ngn -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- exec busybox-7b57f96db7-dslwt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-846529 -- exec busybox-7b57f96db7-dslwt -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (26.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-846529 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-846529 -v=5 --alsologtostderr: (25.31733775s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.03s)
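
Adding a worker to the running cluster is a single command; the status call afterwards only confirms the new node registered. Sketch, same profile as above:

    # add another worker node to the existing profile
    out/minikube-linux-arm64 node add -p multinode-846529 -v=5 --alsologtostderr
    out/minikube-linux-arm64 -p multinode-846529 status --alsologtostderr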

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-846529 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 cp testdata/cp-test.txt multinode-846529:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 cp multinode-846529:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile170455486/001/cp-test_multinode-846529.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 cp multinode-846529:/home/docker/cp-test.txt multinode-846529-m02:/home/docker/cp-test_multinode-846529_multinode-846529-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529-m02 "sudo cat /home/docker/cp-test_multinode-846529_multinode-846529-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 cp multinode-846529:/home/docker/cp-test.txt multinode-846529-m03:/home/docker/cp-test_multinode-846529_multinode-846529-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529-m03 "sudo cat /home/docker/cp-test_multinode-846529_multinode-846529-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 cp testdata/cp-test.txt multinode-846529-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 cp multinode-846529-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile170455486/001/cp-test_multinode-846529-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 cp multinode-846529-m02:/home/docker/cp-test.txt multinode-846529:/home/docker/cp-test_multinode-846529-m02_multinode-846529.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529 "sudo cat /home/docker/cp-test_multinode-846529-m02_multinode-846529.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 cp multinode-846529-m02:/home/docker/cp-test.txt multinode-846529-m03:/home/docker/cp-test_multinode-846529-m02_multinode-846529-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529-m03 "sudo cat /home/docker/cp-test_multinode-846529-m02_multinode-846529-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 cp testdata/cp-test.txt multinode-846529-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 cp multinode-846529-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile170455486/001/cp-test_multinode-846529-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 cp multinode-846529-m03:/home/docker/cp-test.txt multinode-846529:/home/docker/cp-test_multinode-846529-m03_multinode-846529.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529 "sudo cat /home/docker/cp-test_multinode-846529-m03_multinode-846529.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 cp multinode-846529-m03:/home/docker/cp-test.txt multinode-846529-m02:/home/docker/cp-test_multinode-846529-m03_multinode-846529-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529-m02 "sudo cat /home/docker/cp-test_multinode-846529-m03_multinode-846529-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.30s)
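
The copy matrix above exercises every direction of minikube cp: host to node, node to host, and node to node, each verified with an ssh cat on the destination. A condensed sketch of the three forms; the /tmp destination and the _copy filename below are illustrative stand-ins, not the temp paths generated by this run:

    # host -> node
    out/minikube-linux-arm64 -p multinode-846529 cp testdata/cp-test.txt multinode-846529:/home/docker/cp-test.txt
    # node -> host (destination path is illustrative)
    out/minikube-linux-arm64 -p multinode-846529 cp multinode-846529:/home/docker/cp-test.txt /tmp/cp-test_multinode-846529.txt
    # node -> node, then verify on the destination node (target filename is illustrative)
    out/minikube-linux-arm64 -p multinode-846529 cp multinode-846529:/home/docker/cp-test.txt multinode-846529-m02:/home/docker/cp-test_copy.txt
    out/minikube-linux-arm64 -p multinode-846529 ssh -n multinode-846529-m02 "sudo cat /home/docker/cp-test_copy.txt"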

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-846529 node stop m03: (1.334908046s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-846529 status: exit status 7 (843.004724ms)

                                                
                                                
-- stdout --
	multinode-846529
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-846529-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-846529-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 status --alsologtostderr
E1101 11:16:01.181197 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-846529 status --alsologtostderr: exit status 7 (573.513129ms)

                                                
                                                
-- stdout --
	multinode-846529
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-846529-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-846529-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:16:00.911827 2974164 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:16:00.912096 2974164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:16:00.912113 2974164 out.go:374] Setting ErrFile to fd 2...
	I1101 11:16:00.912118 2974164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:16:00.912401 2974164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
	I1101 11:16:00.912608 2974164 out.go:368] Setting JSON to false
	I1101 11:16:00.912635 2974164 mustload.go:66] Loading cluster: multinode-846529
	I1101 11:16:00.913033 2974164 config.go:182] Loaded profile config "multinode-846529": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 11:16:00.913050 2974164 status.go:174] checking status of multinode-846529 ...
	I1101 11:16:00.913598 2974164 cli_runner.go:164] Run: docker container inspect multinode-846529 --format={{.State.Status}}
	I1101 11:16:00.913816 2974164 notify.go:221] Checking for updates...
	I1101 11:16:00.937133 2974164 status.go:371] multinode-846529 host status = "Running" (err=<nil>)
	I1101 11:16:00.937159 2974164 host.go:66] Checking if "multinode-846529" exists ...
	I1101 11:16:00.937457 2974164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-846529
	I1101 11:16:00.958046 2974164 host.go:66] Checking if "multinode-846529" exists ...
	I1101 11:16:00.958345 2974164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:16:00.959037 2974164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-846529
	I1101 11:16:00.977035 2974164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36931 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/multinode-846529/id_rsa Username:docker}
	I1101 11:16:01.081251 2974164 ssh_runner.go:195] Run: systemctl --version
	I1101 11:16:01.087643 2974164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:16:01.101576 2974164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:16:01.170272 2974164 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-01 11:16:01.159586981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:16:01.170905 2974164 kubeconfig.go:125] found "multinode-846529" server: "https://192.168.67.2:8443"
	I1101 11:16:01.170949 2974164 api_server.go:166] Checking apiserver status ...
	I1101 11:16:01.171010 2974164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:16:01.187810 2974164 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup
	I1101 11:16:01.199941 2974164 api_server.go:182] apiserver freezer: "9:freezer:/docker/3959a8829c5b37e84ea09338447a9212ae99b747e6396f779b185a9d65d9c5d3/kubepods/burstable/pod6000004edf46fe3716715513ef763d34/f1b9c91d336cd5a8b3c148c1d535f2e2d83b90cf4fd3072039c23aa45761050b"
	I1101 11:16:01.200092 2974164 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3959a8829c5b37e84ea09338447a9212ae99b747e6396f779b185a9d65d9c5d3/kubepods/burstable/pod6000004edf46fe3716715513ef763d34/f1b9c91d336cd5a8b3c148c1d535f2e2d83b90cf4fd3072039c23aa45761050b/freezer.state
	I1101 11:16:01.210373 2974164 api_server.go:204] freezer state: "THAWED"
	I1101 11:16:01.210404 2974164 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1101 11:16:01.219927 2974164 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1101 11:16:01.219961 2974164 status.go:463] multinode-846529 apiserver status = Running (err=<nil>)
	I1101 11:16:01.219978 2974164 status.go:176] multinode-846529 status: &{Name:multinode-846529 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:16:01.220006 2974164 status.go:174] checking status of multinode-846529-m02 ...
	I1101 11:16:01.220333 2974164 cli_runner.go:164] Run: docker container inspect multinode-846529-m02 --format={{.State.Status}}
	I1101 11:16:01.240360 2974164 status.go:371] multinode-846529-m02 host status = "Running" (err=<nil>)
	I1101 11:16:01.240388 2974164 host.go:66] Checking if "multinode-846529-m02" exists ...
	I1101 11:16:01.240693 2974164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-846529-m02
	I1101 11:16:01.261157 2974164 host.go:66] Checking if "multinode-846529-m02" exists ...
	I1101 11:16:01.261529 2974164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 11:16:01.261578 2974164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-846529-m02
	I1101 11:16:01.282515 2974164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36936 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/multinode-846529-m02/id_rsa Username:docker}
	I1101 11:16:01.394190 2974164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:16:01.409610 2974164 status.go:176] multinode-846529-m02 status: &{Name:multinode-846529-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:16:01.409645 2974164 status.go:174] checking status of multinode-846529-m03 ...
	I1101 11:16:01.409994 2974164 cli_runner.go:164] Run: docker container inspect multinode-846529-m03 --format={{.State.Status}}
	I1101 11:16:01.429363 2974164 status.go:371] multinode-846529-m03 host status = "Stopped" (err=<nil>)
	I1101 11:16:01.429398 2974164 status.go:384] host is not running, skipping remaining checks
	I1101 11:16:01.429405 2974164 status.go:176] multinode-846529-m03 status: &{Name:multinode-846529-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.75s)
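
Stopping a single node leaves the rest of the cluster running, and status then exits with code 7 because one host is down; the test treats that non-zero exit as the expected outcome. Sketch:

    # stop only the m03 worker
    out/minikube-linux-arm64 -p multinode-846529 node stop m03
    # status exits 7 while any node is stopped, so surface the code instead of assuming success
    out/minikube-linux-arm64 -p multinode-846529 status || echo "status exited with $?"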

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (7.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-846529 node start m03 -v=5 --alsologtostderr: (6.971320372s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.74s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (83.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-846529
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-846529
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-846529: (25.104168186s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-846529 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-846529 --wait=true -v=5 --alsologtostderr: (58.365560124s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-846529
--- PASS: TestMultiNode/serial/RestartKeepsNodes (83.59s)
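
The restart check stops the whole profile and starts it again with --wait=true, comparing the node list before and after to confirm no nodes were lost. Sketch, commands as logged:

    out/minikube-linux-arm64 node list -p multinode-846529
    out/minikube-linux-arm64 stop -p multinode-846529
    # restarting the profile should bring back every previously added node
    out/minikube-linux-arm64 start -p multinode-846529 --wait=true -v=5 --alsologtostderr
    out/minikube-linux-arm64 node list -p multinode-846529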

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-846529 node delete m03: (4.965867273s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 stop
E1101 11:17:40.205890 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-846529 stop: (23.9140051s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-846529 status: exit status 7 (91.323877ms)

                                                
                                                
-- stdout --
	multinode-846529
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-846529-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-846529 status --alsologtostderr: exit status 7 (91.365558ms)

                                                
                                                
-- stdout --
	multinode-846529
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-846529-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:18:02.481789 2983007 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:18:02.481983 2983007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:18:02.481997 2983007 out.go:374] Setting ErrFile to fd 2...
	I1101 11:18:02.482003 2983007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:18:02.482312 2983007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
	I1101 11:18:02.482544 2983007 out.go:368] Setting JSON to false
	I1101 11:18:02.482588 2983007 mustload.go:66] Loading cluster: multinode-846529
	I1101 11:18:02.482665 2983007 notify.go:221] Checking for updates...
	I1101 11:18:02.483030 2983007 config.go:182] Loaded profile config "multinode-846529": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 11:18:02.483049 2983007 status.go:174] checking status of multinode-846529 ...
	I1101 11:18:02.483929 2983007 cli_runner.go:164] Run: docker container inspect multinode-846529 --format={{.State.Status}}
	I1101 11:18:02.502577 2983007 status.go:371] multinode-846529 host status = "Stopped" (err=<nil>)
	I1101 11:18:02.502604 2983007 status.go:384] host is not running, skipping remaining checks
	I1101 11:18:02.502611 2983007 status.go:176] multinode-846529 status: &{Name:multinode-846529 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 11:18:02.502639 2983007 status.go:174] checking status of multinode-846529-m02 ...
	I1101 11:18:02.502946 2983007 cli_runner.go:164] Run: docker container inspect multinode-846529-m02 --format={{.State.Status}}
	I1101 11:18:02.525541 2983007 status.go:371] multinode-846529-m02 host status = "Stopped" (err=<nil>)
	I1101 11:18:02.525561 2983007 status.go:384] host is not running, skipping remaining checks
	I1101 11:18:02.525576 2983007 status.go:176] multinode-846529-m02 status: &{Name:multinode-846529-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (54.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-846529 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-846529 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (53.779732862s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-846529 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.45s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (35.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-846529
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-846529-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-846529-m02 --driver=docker  --container-runtime=containerd: exit status 14 (89.761429ms)

                                                
                                                
-- stdout --
	* [multinode-846529-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-846529-m02' is duplicated with machine name 'multinode-846529-m02' in profile 'multinode-846529'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-846529-m03 --driver=docker  --container-runtime=containerd
E1101 11:19:03.272027 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-846529-m03 --driver=docker  --container-runtime=containerd: (33.390748224s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-846529
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-846529: exit status 80 (329.631571ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-846529 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-846529-m03 already exists in multinode-846529-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-846529-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-846529-m03: (2.127253325s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.99s)
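
The name-conflict test checks two guard rails: a new profile cannot reuse a machine name that already belongs to another profile (exit status 14, MK_USAGE), and node add refuses to create a node whose generated name collides with an existing standalone profile (exit status 80, GUEST_NODE_ADD). Sketch of the failing invocations, names taken from this run:

    # rejected: multinode-846529-m02 is already the worker machine of profile multinode-846529
    out/minikube-linux-arm64 start -p multinode-846529-m02 --driver=docker --container-runtime=containerd
    # allowed as a standalone profile ...
    out/minikube-linux-arm64 start -p multinode-846529-m03 --driver=docker --container-runtime=containerd
    # ... but node add then fails because the next generated node name (m03 in this run) is taken
    out/minikube-linux-arm64 node add -p multinode-846529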

                                                
                                    
x
+
TestPreload (117.47s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-026271 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-026271 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m1.177916102s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-026271 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-026271 image pull gcr.io/k8s-minikube/busybox: (2.342067437s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-026271
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-026271: (1.343819125s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-026271 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1101 11:21:01.180835 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-026271 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (49.907589847s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-026271 image list
helpers_test.go:175: Cleaning up "test-preload-026271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-026271
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-026271: (2.452697025s)
--- PASS: TestPreload (117.47s)
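
The preload test starts a cluster with --preload=false, side-loads an image, stops the cluster, restarts it with preloads allowed, and lists images afterwards, presumably to confirm the manually pulled image is still present. Sketch with the profile name, image, and flags from this run:

    out/minikube-linux-arm64 start -p test-preload-026271 --memory=3072 --wait=true --preload=false \
        --driver=docker --container-runtime=containerd --kubernetes-version=v1.32.0
    out/minikube-linux-arm64 -p test-preload-026271 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-arm64 stop -p test-preload-026271
    # restart without --preload=false, then check whether the image survived the stop/start cycle
    out/minikube-linux-arm64 start -p test-preload-026271 --memory=3072 --wait=true \
        --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p test-preload-026271 image list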

                                                
                                    
x
+
TestScheduledStopUnix (112.18s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-323014 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-323014 --memory=3072 --driver=docker  --container-runtime=containerd: (35.670534288s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-323014 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-323014 -n scheduled-stop-323014
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-323014 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1101 11:22:10.838373 2849422 retry.go:31] will retry after 98.396µs: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.839462 2849422 retry.go:31] will retry after 211.779µs: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.840058 2849422 retry.go:31] will retry after 246.304µs: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.840479 2849422 retry.go:31] will retry after 270.754µs: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.841652 2849422 retry.go:31] will retry after 540.263µs: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.842769 2849422 retry.go:31] will retry after 433.332µs: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.843905 2849422 retry.go:31] will retry after 935.713µs: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.845048 2849422 retry.go:31] will retry after 1.24854ms: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.847309 2849422 retry.go:31] will retry after 1.539438ms: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.849466 2849422 retry.go:31] will retry after 4.864238ms: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.854642 2849422 retry.go:31] will retry after 6.000813ms: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.860904 2849422 retry.go:31] will retry after 7.662752ms: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.869128 2849422 retry.go:31] will retry after 9.770625ms: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.879332 2849422 retry.go:31] will retry after 27.658809ms: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.907567 2849422 retry.go:31] will retry after 28.506494ms: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
I1101 11:22:10.936821 2849422 retry.go:31] will retry after 27.419591ms: open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/scheduled-stop-323014/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-323014 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-323014 -n scheduled-stop-323014
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-323014
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-323014 --schedule 15s
E1101 11:22:40.205922 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-323014
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-323014: exit status 7 (64.721263ms)

                                                
                                                
-- stdout --
	scheduled-stop-323014
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-323014 -n scheduled-stop-323014
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-323014 -n scheduled-stop-323014: exit status 7 (68.535299ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-323014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-323014
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-323014: (4.921479116s)
--- PASS: TestScheduledStopUnix (112.18s)
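
The scheduled-stop flow schedules a stop, reads the pending stop back through status --format={{.TimeToStop}}, cancels it, then lets a 15s schedule fire; status afterwards reports Stopped and exits with code 7. Sketch of the user-facing commands as logged:

    # schedule a stop five minutes out, then confirm a stop is pending
    out/minikube-linux-arm64 stop -p scheduled-stop-323014 --schedule 5m
    out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-323014 -n scheduled-stop-323014
    # cancel the pending stop
    out/minikube-linux-arm64 stop -p scheduled-stop-323014 --cancel-scheduled
    # let a short schedule run to completion; the host is then Stopped and status exits 7
    out/minikube-linux-arm64 stop -p scheduled-stop-323014 --schedule 15s
    out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-323014 -n scheduled-stop-323014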

                                                
                                    
x
+
TestInsufficientStorage (13.63s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-793128 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-793128 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (11.09712473s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"35110ba0-89d4-4fc6-b4c5-d22cd4438ec5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-793128] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f2feb72-19fa-455a-9c8f-3fbac9cadedf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21830"}}
	{"specversion":"1.0","id":"28ef4877-4614-4014-9736-7cd395658f06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e7bc26ee-92ad-489f-bb93-e6d91865669c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig"}}
	{"specversion":"1.0","id":"c72647e3-0b22-4ca7-b2b0-9714d0f4aa15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube"}}
	{"specversion":"1.0","id":"bbfd9819-1749-4c37-8885-e90d340fc5b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7a6953fb-ef62-43d2-a491-20f9337e34e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ae1f2dcd-157a-4880-a274-063bf5f34d25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fab85970-88a5-4fb6-a13a-4f60d89780f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c712560d-b433-4e33-8706-d79d2b3c938d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c01ae8ed-cc59-4a8e-b9d9-c0cea1cbf019","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2482ad97-a908-4801-9270-138b4c62c396","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-793128\" primary control-plane node in \"insufficient-storage-793128\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7712a1e-fa5b-48bb-a773-a78ee1e1a2d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"810f3b93-ec44-4c73-873a-bf5f9b26b0c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff03e3b4-219e-4569-92ac-29884fefbb56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-793128 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-793128 --output=json --layout=cluster: exit status 7 (305.545239ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-793128","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-793128","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 11:23:38.216630 3001600 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-793128" does not appear in /home/jenkins/minikube-integration/21830-2847530/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-793128 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-793128 --output=json --layout=cluster: exit status 7 (294.349574ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-793128","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-793128","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 11:23:38.512785 3001670 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-793128" does not appear in /home/jenkins/minikube-integration/21830-2847530/kubeconfig
	E1101 11:23:38.522577 3001670 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/insufficient-storage-793128/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-793128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-793128
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-793128: (1.935602301s)
--- PASS: TestInsufficientStorage (13.63s)
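
The storage test drives an ordinary start with MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 injected (both are echoed in the JSON output above), which makes the pre-flight disk check fail deterministically with exit status 26 (RSRC_DOCKER_STORAGE); the error text notes that --force would skip the check. A sketch of reproducing the simulated failure, assuming the two MINIKUBE_TEST_* values are plain environment variables as the echoed settings suggest:

    # simulate a nearly full /var so the pre-flight storage check trips (exit status 26)
    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
        out/minikube-linux-arm64 start -p insufficient-storage-793128 --memory=3072 \
        --output=json --wait=true --driver=docker --container-runtime=containerd
    # the cluster status view then reports StatusCode 507 (InsufficientStorage)
    out/minikube-linux-arm64 status -p insufficient-storage-793128 --output=json --layout=cluster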

                                                
                                    
x
+
TestRunningBinaryUpgrade (55.96s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
E1101 11:27:40.204944 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.699470211 start -p running-upgrade-225018 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.699470211 start -p running-upgrade-225018 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (28.837873605s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-225018 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-225018 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.448695226s)
helpers_test.go:175: Cleaning up "running-upgrade-225018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-225018
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-225018: (1.963928622s)
--- PASS: TestRunningBinaryUpgrade (55.96s)

                                                
                                    
x
+
TestMissingContainerUpgrade (159.89s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1275639207 start -p missing-upgrade-279177 --memory=3072 --driver=docker  --container-runtime=containerd
E1101 11:24:04.252957 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1275639207 start -p missing-upgrade-279177 --memory=3072 --driver=docker  --container-runtime=containerd: (1m5.086947415s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-279177
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-279177
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-279177 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-279177 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m30.418682248s)
helpers_test.go:175: Cleaning up "missing-upgrade-279177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-279177
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-279177: (2.147772299s)
--- PASS: TestMissingContainerUpgrade (159.89s)
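
The missing-container path creates the cluster with an older minikube release, deletes the node's Docker container behind minikube's back, and checks that the current binary recreates it on start. Sketch; the /tmp/minikube-v1.32.0.* path is the older release binary the harness downloaded, so substitute any v1.32.0 build when reproducing:

    # create the cluster with the old release, then remove its container out from under it
    /tmp/minikube-v1.32.0.1275639207 start -p missing-upgrade-279177 --memory=3072 --driver=docker --container-runtime=containerd
    docker stop missing-upgrade-279177
    docker rm missing-upgrade-279177
    # the newer binary should notice the missing container and recreate it
    out/minikube-linux-arm64 start -p missing-upgrade-279177 --memory=3072 --alsologtostderr -v=1 \
        --driver=docker --container-runtime=containerd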

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-330680 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-330680 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (97.816926ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-330680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
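
This first NoKubernetes subtest only exercises flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive, so the command exits 14 with MK_USAGE before the driver is touched. Sketch, including the unset command the error message itself suggests:

    # rejected up front: a Kubernetes version cannot be pinned while Kubernetes is disabled
    out/minikube-linux-arm64 start -p NoKubernetes-330680 --no-kubernetes --kubernetes-version=v1.28.0 \
        --driver=docker --container-runtime=containerd
    # per the error text, a globally configured version can be cleared with:
    out/minikube-linux-arm64 config unset kubernetes-version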

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (48.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-330680 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-330680 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (47.707183092s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-330680 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.32s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-330680 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-330680 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.378466211s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-330680 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-330680 status -o json: exit status 2 (483.925513ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-330680","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-330680
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-330680: (2.459177126s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.32s)

                                                
                                    
TestNoKubernetes/serial/Start (9.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-330680 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-330680 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (9.231714098s)
--- PASS: TestNoKubernetes/serial/Start (9.24s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-330680 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-330680 "sudo systemctl is-active --quiet service kubelet": exit status 1 (267.337219ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
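The check above passes because `systemctl is-active` exits 0 only for an active unit; with no kubelet running it reports "inactive" (or "unknown") and exits 3, which minikube ssh surfaces as the non-zero status the assertion expects. A rough equivalent outside the test harness, dropping --quiet so the state is printed:

    $ minikube ssh -p NoKubernetes-330680 "sudo systemctl is-active kubelet"
    inactive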

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.68s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-330680
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-330680: (1.285463529s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-330680 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-330680 --driver=docker  --container-runtime=containerd: (8.004410646s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-330680 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-330680 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.022258ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (75.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3459625566 start -p stopped-upgrade-731444 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3459625566 start -p stopped-upgrade-731444 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (34.299731138s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3459625566 -p stopped-upgrade-731444 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3459625566 -p stopped-upgrade-731444 stop: (1.28914601s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-731444 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-731444 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.648481081s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (75.24s)
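The upgrade path exercised here is: provision with an old release binary, stop the cluster, then start the same profile with the binary under test. A hedged sketch of reproducing it by hand (the download URL follows minikube's usual release-asset naming; adjust version and architecture as needed):

    $ curl -Lo /tmp/minikube-v1.32.0 https://github.com/kubernetes/minikube/releases/download/v1.32.0/minikube-linux-arm64
    $ chmod +x /tmp/minikube-v1.32.0
    $ /tmp/minikube-v1.32.0 start -p stopped-upgrade-731444 --memory=3072 --vm-driver=docker --container-runtime=containerd
    $ /tmp/minikube-v1.32.0 -p stopped-upgrade-731444 stop
    $ out/minikube-linux-arm64 start -p stopped-upgrade-731444 --memory=3072 --driver=docker --container-runtime=containerd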

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-731444
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-731444: (1.371008575s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

                                                
                                    
TestPause/serial/Start (81.55s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-111012 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-111012 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m21.54700724s)
--- PASS: TestPause/serial/Start (81.55s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.09s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-111012 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-111012 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.080647173s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.09s)

                                                
                                    
TestPause/serial/Pause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-111012 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

                                                
                                    
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-111012 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-111012 --output=json --layout=cluster: exit status 2 (341.524434ms)

                                                
                                                
-- stdout --
	{"Name":"pause-111012","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-111012","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
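In the cluster layout above, StatusCode 418 / "Paused" marks the paused control plane and 405 / "Stopped" the halted kubelet, while the command itself exits 2 because not everything is running. A small sketch for pulling those fields out of the same output (assuming jq is available on the host):

    $ out/minikube-linux-arm64 status -p pause-111012 --output=json --layout=cluster | \
        jq '{cluster: .StatusName, apiserver: .Nodes[0].Components.apiserver.StatusName, kubelet: .Nodes[0].Components.kubelet.StatusName}'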

                                                
                                    
TestPause/serial/Unpause (0.63s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-111012 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

                                                
                                    
TestPause/serial/PauseAgain (0.84s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-111012 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

                                                
                                    
TestPause/serial/DeletePaused (2.41s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-111012 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-111012 --alsologtostderr -v=5: (2.409521964s)
--- PASS: TestPause/serial/DeletePaused (2.41s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (15.95s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (15.887741288s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-111012
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-111012: exit status 1 (19.361528ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-111012: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.95s)

                                                
                                    
TestNetworkPlugins/group/false (3.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-921290 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-921290 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (189.796091ms)

                                                
                                                
-- stdout --
	* [false-921290] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:31:06.229894 3037945 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:31:06.230062 3037945 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:31:06.230093 3037945 out.go:374] Setting ErrFile to fd 2...
	I1101 11:31:06.230114 3037945 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:31:06.230392 3037945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
	I1101 11:31:06.230823 3037945 out.go:368] Setting JSON to false
	I1101 11:31:06.231942 3037945 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":72812,"bootTime":1761923854,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 11:31:06.232034 3037945 start.go:143] virtualization:  
	I1101 11:31:06.235654 3037945 out.go:179] * [false-921290] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1101 11:31:06.239498 3037945 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:31:06.239613 3037945 notify.go:221] Checking for updates...
	I1101 11:31:06.245451 3037945 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:31:06.248484 3037945 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
	I1101 11:31:06.251329 3037945 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
	I1101 11:31:06.254275 3037945 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 11:31:06.258491 3037945 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:31:06.261958 3037945 config.go:182] Loaded profile config "kubernetes-upgrade-847244": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1101 11:31:06.262105 3037945 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:31:06.290325 3037945 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1101 11:31:06.290444 3037945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 11:31:06.347105 3037945 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-01 11:31:06.33826553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1101 11:31:06.347222 3037945 docker.go:319] overlay module found
	I1101 11:31:06.350383 3037945 out.go:179] * Using the docker driver based on user configuration
	I1101 11:31:06.353209 3037945 start.go:309] selected driver: docker
	I1101 11:31:06.353229 3037945 start.go:930] validating driver "docker" against <nil>
	I1101 11:31:06.353243 3037945 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:31:06.356841 3037945 out.go:203] 
	W1101 11:31:06.359703 3037945 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1101 11:31:06.362539 3037945 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-921290 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-921290

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-921290

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-921290

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-921290

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-921290

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-921290

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-921290

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-921290

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-921290

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-921290

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-921290

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-921290" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-921290" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 11:26:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-847244
contexts:
- context:
    cluster: kubernetes-upgrade-847244
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 11:26:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-847244
  name: kubernetes-upgrade-847244
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-847244
  user:
    client-certificate: /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/client.crt
    client-key: /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-921290

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-921290"

                                                
                                                
----------------------- debugLogs end: false-921290 [took: 3.224797115s] --------------------------------
helpers_test.go:175: Cleaning up "false-921290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-921290
--- PASS: TestNetworkPlugins/group/false (3.56s)
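This group is expected to fail fast: with the containerd runtime, --cni=false is rejected because containerd relies on a CNI plugin for pod networking, hence the MK_USAGE exit above. For comparison, a start line that would pass validation (bridge is just one of the built-in CNI choices; any supported --cni value or a manifest path would do):

    $ out/minikube-linux-arm64 start -p false-921290 --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd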

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (61.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-422756 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-422756 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m1.686272569s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.69s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-338889 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-338889 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m18.711756277s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.71s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-422756 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [79ca436a-62eb-4d13-9875-02f8eb2cf581] Pending
helpers_test.go:352: "busybox" [79ca436a-62eb-4d13-9875-02f8eb2cf581] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1101 11:35:43.273802 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [79ca436a-62eb-4d13-9875-02f8eb2cf581] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003827207s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-422756 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.53s)
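The deploy step creates a pod from testdata/busybox.yaml, waits for the integration-test=busybox label to become Ready, then execs `ulimit -n` inside it. The fixture itself is not reproduced in this log; a rough stand-in that satisfies the same selector would be the following (the image matches the one later listed under VerifyKubernetesImages, but the sleep command and exact shape of the real manifest are assumptions):

    $ kubectl --context old-k8s-version-422756 run busybox --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc --labels=integration-test=busybox --restart=Never -- sleep 3600
    $ kubectl --context old-k8s-version-422756 exec busybox -- /bin/sh -c "ulimit -n"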

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-422756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-422756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.327038172s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-422756 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-422756 --alsologtostderr -v=3
E1101 11:36:01.181021 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-422756 --alsologtostderr -v=3: (12.286583074s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-422756 -n old-k8s-version-422756
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-422756 -n old-k8s-version-422756: exit status 7 (80.748197ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-422756 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (57.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-422756 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-422756 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (56.970794053s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-422756 -n old-k8s-version-422756
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (57.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-338889 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5c1a0e96-60d1-4f68-b00c-d3aedfd87158] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5c1a0e96-60d1-4f68-b00c-d3aedfd87158] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003603566s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-338889 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-bjkg6" [ddb59537-5dc7-45c5-b036-4470e35abf44] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003929723s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-338889 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-338889 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.01059788s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-338889 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-338889 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-338889 --alsologtostderr -v=3: (12.13057404s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-bjkg6" [ddb59537-5dc7-45c5-b036-4470e35abf44] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004107076s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-422756 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-422756 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-422756 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-422756 -n old-k8s-version-422756
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-422756 -n old-k8s-version-422756: exit status 2 (338.513404ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-422756 -n old-k8s-version-422756
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-422756 -n old-k8s-version-422756: exit status 2 (333.473893ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-422756 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-422756 -n old-k8s-version-422756
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-422756 -n old-k8s-version-422756
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-338889 -n default-k8s-diff-port-338889
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-338889 -n default-k8s-diff-port-338889: exit status 7 (98.888658ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-338889 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-338889 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-338889 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (54.483635462s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-338889 -n default-k8s-diff-port-338889
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.87s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (88.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-741605 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1101 11:37:40.205706 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-741605 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m28.639683126s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.64s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gcz82" [ef957015-e8f4-43a3-9487-8ecd37e6ffa5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003216726s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gcz82" [ef957015-e8f4-43a3-9487-8ecd37e6ffa5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004095242s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-338889 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-338889 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-338889 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-338889 -n default-k8s-diff-port-338889
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-338889 -n default-k8s-diff-port-338889: exit status 2 (325.273243ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-338889 -n default-k8s-diff-port-338889
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-338889 -n default-k8s-diff-port-338889: exit status 2 (337.674061ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-338889 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-338889 -n default-k8s-diff-port-338889
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-338889 -n default-k8s-diff-port-338889
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (60.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-578416 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-578416 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m0.371996356s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (60.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-741605 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d486d938-be15-40dc-a72d-3a7f7e52e7e3] Pending
helpers_test.go:352: "busybox" [d486d938-be15-40dc-a72d-3a7f7e52e7e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d486d938-be15-40dc-a72d-3a7f7e52e7e3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004299268s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-741605 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-741605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-741605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.2972894s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-741605 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-741605 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-741605 --alsologtostderr -v=3: (12.64219071s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.64s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-741605 -n embed-certs-741605
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-741605 -n embed-certs-741605: exit status 7 (96.531836ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-741605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (50.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-741605 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-741605 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (49.745279641s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-741605 -n embed-certs-741605
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-578416 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7c7de002-0429-4bfa-a537-285bcaf07837] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7c7de002-0429-4bfa-a537-285bcaf07837] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.007675395s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-578416 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.43s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-578416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-578416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.142050532s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-578416 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-578416 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-578416 --alsologtostderr -v=3: (12.194289372s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-578416 -n no-preload-578416
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-578416 -n no-preload-578416: exit status 7 (71.44851ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-578416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (50.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-578416 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-578416 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (50.339644195s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-578416 -n no-preload-578416
E1101 11:40:44.254646 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/addons-442433/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gkrb2" [7d424899-fb45-4f6f-9c10-da2e0072705e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003392499s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gkrb2" [7d424899-fb45-4f6f-9c10-da2e0072705e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003378072s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-741605 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-741605 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-741605 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-741605 --alsologtostderr -v=1: (1.071799043s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-741605 -n embed-certs-741605
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-741605 -n embed-certs-741605: exit status 2 (460.334043ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-741605 -n embed-certs-741605
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-741605 -n embed-certs-741605: exit status 2 (416.656617ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-741605 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-741605 -n embed-certs-741605
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-741605 -n embed-certs-741605
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (43.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-029603 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1101 11:40:41.008728 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:40:41.015107 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:40:41.026378 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:40:41.047705 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:40:41.089009 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:40:41.170354 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:40:41.332383 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:40:41.653986 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:40:42.295412 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:40:43.577221 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-029603 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (43.303632761s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mbdl6" [2203236a-5bb6-4e60-a5f2-33d123d643f2] Running
E1101 11:40:46.139510 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00323238s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mbdl6" [2203236a-5bb6-4e60-a5f2-33d123d643f2] Running
E1101 11:40:51.261034 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003106756s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-578416 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-578416 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-578416 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-578416 --alsologtostderr -v=1: (1.02275708s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-578416 -n no-preload-578416
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-578416 -n no-preload-578416: exit status 2 (419.60352ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-578416 -n no-preload-578416
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-578416 -n no-preload-578416: exit status 2 (455.668905ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-578416 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-578416 -n no-preload-578416
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-578416 -n no-preload-578416
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.91s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (58.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-921290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-921290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (58.307506493s)
--- PASS: TestNetworkPlugins/group/auto/Start (58.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-029603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-029603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.247475097s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (3.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-029603 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-029603 --alsologtostderr -v=3: (3.59689349s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.60s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-029603 -n newest-cni-029603
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-029603 -n newest-cni-029603: exit status 7 (93.114022ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-029603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (24.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-029603 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1101 11:41:21.984159 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-029603 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (23.486419872s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-029603 -n newest-cni-029603
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (24.05s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-029603 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-029603 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-029603 --alsologtostderr -v=1: (1.128634602s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-029603 -n newest-cni-029603
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-029603 -n newest-cni-029603: exit status 2 (520.657769ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-029603 -n newest-cni-029603
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-029603 -n newest-cni-029603: exit status 2 (485.49621ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-029603 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-029603 --alsologtostderr -v=1: (1.017751841s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-029603 -n newest-cni-029603
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-029603 -n newest-cni-029603
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.45s)
E1101 11:47:23.872386 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (84.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-921290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1101 11:41:56.169416 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:41:56.175796 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:41:56.187704 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:41:56.209149 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:41:56.250543 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:41:56.332107 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:41:56.493754 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:41:56.815796 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:41:57.457327 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:41:58.739582 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:42:01.301781 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:42:02.945955 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-921290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m24.09212982s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.09s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-921290 "pgrep -a kubelet"
I1101 11:42:03.684780 2849422 config.go:182] Loaded profile config "auto-921290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-921290 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8sn4d" [8036f63e-dfb5-447b-bbba-a8852ff8f910] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 11:42:06.423722 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-8sn4d" [8036f63e-dfb5-447b-bbba-a8852ff8f910] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003723304s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.35s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-921290 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-921290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-921290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (52.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-921290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1101 11:42:40.205822 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-921290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (52.479754004s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.48s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-w6hfm" [43ef0c1f-b8c3-446d-9964-37c2ed8490ca] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004361986s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-921290 "pgrep -a kubelet"
I1101 11:43:13.104496 2849422 config.go:182] Loaded profile config "kindnet-921290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-921290 replace --force -f testdata/netcat-deployment.yaml
I1101 11:43:13.517954 2849422 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-84dzl" [a831256e-6ad7-42fa-9376-518ffbf8fb97] Pending
helpers_test.go:352: "netcat-cd4db9dbf-84dzl" [a831256e-6ad7-42fa-9376-518ffbf8fb97] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 11:43:18.108838 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-84dzl" [a831256e-6ad7-42fa-9376-518ffbf8fb97] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004195816s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-921290 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-921290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1101 11:43:24.867578 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-921290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-j7fmx" [d53cbf32-e4b7-448e-99c9-1513088ebf3d] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004053189s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-921290 "pgrep -a kubelet"
I1101 11:43:36.157629 2849422 config.go:182] Loaded profile config "calico-921290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

TestNetworkPlugins/group/calico/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-921290 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h92tl" [1505fcce-5e89-43a8-9857-f3b2604171ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h92tl" [1505fcce-5e89-43a8-9857-f3b2604171ee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004722329s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.37s)

TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-921290 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Start (70.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-921290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-921290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m10.10470382s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.10s)
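This profile is started with --cni=testdata/kube-flannel.yaml, i.e. minikube applies a user-supplied CNI manifest instead of a built-in plugin. A quick, illustrative way to confirm the manifest landed (the app=flannel label is an assumption based on the stock Flannel manifest, not something shown in this log):
kubectl --context custom-flannel-921290 get pods -A -l app=flannel
out/minikube-linux-arm64 ssh -p custom-flannel-921290 "ls /etc/cni/net.d"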

TestNetworkPlugins/group/calico/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-921290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-921290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (79.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-921290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1101 11:44:32.005148 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/no-preload-578416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:44:32.011529 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/no-preload-578416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:44:32.022908 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/no-preload-578416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:44:32.044287 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/no-preload-578416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:44:32.085618 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/no-preload-578416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:44:32.166982 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/no-preload-578416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:44:32.328507 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/no-preload-578416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:44:32.649992 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/no-preload-578416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:44:33.291609 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/no-preload-578416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:44:34.572920 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/no-preload-578416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:44:37.134262 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/no-preload-578416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:44:40.030093 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/default-k8s-diff-port-338889/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:44:42.255617 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/no-preload-578416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:44:52.497570 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/no-preload-578416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-921290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m19.463489999s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.46s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-921290 "pgrep -a kubelet"
I1101 11:44:58.150012 2849422 config.go:182] Loaded profile config "custom-flannel-921290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-921290 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-58xg5" [73a39bf4-29f1-4a98-b925-19a62432efcc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-58xg5" [73a39bf4-29f1-4a98-b925-19a62432efcc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003334719s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-921290 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-921290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-921290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (65.55s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-921290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-921290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m5.545373134s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.55s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-921290 "pgrep -a kubelet"
I1101 11:45:34.194810 2849422 config.go:182] Loaded profile config "enable-default-cni-921290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-921290 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ktm2l" [cebec93a-3802-45db-9319-0219ce9014c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ktm2l" [cebec93a-3802-45db-9319-0219ce9014c2] Running
E1101 11:45:41.008384 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/old-k8s-version-422756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.009659517s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.41s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-921290 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-921290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-921290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (73.49s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-921290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-921290 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m13.48950232s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.49s)
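Unlike the kindnet, calico and flannel groups, the bridge group has no ControllerPod step, because the built-in bridge CNI installs no agent pods; connectivity is exercised directly by the NetCatPod test below. If needed, the generated CNI configuration can be inspected on the node, for example (illustrative; file names under /etc/cni/net.d vary between minikube versions):
out/minikube-linux-arm64 ssh -p bridge-921290 "ls /etc/cni/net.d"
out/minikube-linux-arm64 ssh -p bridge-921290 "sudo cat /etc/cni/net.d/*.conflist"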

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-kp75c" [60ef9ffc-8e84-44d0-9825-0459fe44c789] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003602562s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
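The poll above watches the Flannel daemonset pods directly; the kube-flannel namespace and app=flannel label are taken from the waiting line in this block. The same check done by hand would look roughly like:
kubectl --context flannel-921290 -n kube-flannel get pods -l app=flannel -o wide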

TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-921290 "pgrep -a kubelet"
I1101 11:46:43.496985 2849422 config.go:182] Loaded profile config "flannel-921290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-921290 replace --force -f testdata/netcat-deployment.yaml
I1101 11:46:43.835845 2849422 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6xpfc" [04a95b04-27cb-4284-962d-2bc6f88f7de5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6xpfc" [04a95b04-27cb-4284-962d-2bc6f88f7de5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003280267s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-921290 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-921290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-921290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-921290 "pgrep -a kubelet"
E1101 11:47:24.490880 2849422 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/auto-921290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1101 11:47:24.723502 2849422 config.go:182] Loaded profile config "bridge-921290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-921290 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b2fd6" [6cdeb42d-9ac9-4825-bcda-914b92bbc566] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b2fd6" [6cdeb42d-9ac9-4825-bcda-914b92bbc566] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.002714916s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-921290 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-921290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-921290 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (30/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-162622 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-162622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-162622
--- SKIP: TestDownloadOnlyKic (0.43s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:35: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-578432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-578432
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (3.44s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-921290 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-921290

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-921290

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-921290

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-921290

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-921290

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-921290

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-921290

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-921290

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-921290

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-921290

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: /etc/hosts:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: /etc/resolv.conf:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-921290

>>> host: crictl pods:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: crictl containers:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> k8s: describe netcat deployment:
error: context "kubenet-921290" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-921290" does not exist

>>> k8s: netcat logs:
error: context "kubenet-921290" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-921290" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-921290" does not exist

>>> k8s: coredns logs:
error: context "kubenet-921290" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-921290" does not exist

>>> k8s: api server logs:
error: context "kubenet-921290" does not exist

>>> host: /etc/cni:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: ip a s:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: ip r s:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: iptables-save:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: iptables table nat:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-921290" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-921290" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-921290" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: kubelet daemon config:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> k8s: kubelet logs:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 11:26:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-847244
contexts:
- context:
    cluster: kubernetes-upgrade-847244
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 11:26:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-847244
  name: kubernetes-upgrade-847244
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-847244
  user:
    client-certificate: /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/client.crt
    client-key: /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-921290

>>> host: docker daemon status:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: docker daemon config:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: docker system info:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: cri-docker daemon status:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: cri-docker daemon config:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: cri-dockerd version:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: containerd daemon status:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: containerd daemon config:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: containerd config dump:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: crio daemon status:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: crio daemon config:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: /etc/crio:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

>>> host: crio config:
* Profile "kubenet-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-921290"

                                                
                                                
----------------------- debugLogs end: kubenet-921290 [took: 3.285185082s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-921290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-921290
--- SKIP: TestNetworkPlugins/group/kubenet (3.44s)
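The ">>> host: ..." stanzas above are gathered by shelling out to the minikube binary for the profile under test; because these network-plugin variants are skipped before any cluster is created, every probe returns the "Profile ... not found" hint instead of real output. A minimal, hypothetical Go sketch of that kind of collector (assuming a `minikube` binary on PATH and `minikube ssh` for host commands; this is not minikube's actual debugLogs helper):

package main

import (
	"fmt"
	"os/exec"
)

// hostCmd runs a command on the cluster node of the given profile via
// `minikube -p <profile> ssh -- <cmd>` and returns the combined output.
// For a profile that was never started, minikube prints the same
// "Profile ... not found" guidance seen throughout this log.
func hostCmd(profile string, args ...string) string {
	cmd := exec.Command("minikube", append([]string{"-p", profile, "ssh", "--"}, args...)...)
	out, _ := cmd.CombinedOutput() // keep whatever output exists, even on error, as the report does
	return string(out)
}

func main() {
	fmt.Println(">>> host: /etc/containerd/config.toml:")
	fmt.Println(hostCmd("kubenet-921290", "sudo", "cat", "/etc/containerd/config.toml"))
}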

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-921290 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-921290" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21830-2847530/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 11:26:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-847244
contexts:
- context:
    cluster: kubernetes-upgrade-847244
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 11:26:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-847244
  name: kubernetes-upgrade-847244
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-847244
  user:
    client-certificate: /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/client.crt
    client-key: /home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/kubernetes-upgrade-847244/client.key

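The kubeconfig dumped above contains only the kubernetes-upgrade-847244 cluster, context, and user, and its current-context is empty; that is why every kubectl call against the cilium-921290 context in this section fails with "context was not found" / "does not exist". A small illustrative Go sketch (not part of the test suite) that loads a kubeconfig with client-go's clientcmd and lists its contexts; the file path is an assumption:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path assumed for illustration; the integration run keeps its kubeconfig
	// under the per-run minikube home shown in the paths above.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21830-2847530/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("available context:", name)
	}
	if _, ok := cfg.Contexts["cilium-921290"]; !ok {
		fmt.Println(`context "cilium-921290" does not exist`) // matches the kubectl errors in this section
	}
}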
                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-921290

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-921290" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-921290"

                                                
                                                
----------------------- debugLogs end: cilium-921290 [took: 3.750931773s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-921290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-921290
--- SKIP: TestNetworkPlugins/group/cilium (3.90s)

                                                
                                    