Test Report: Docker_Linux_containerd 21683

1b58c48826b6fb4d6f7297e87780eae465bc5f37:2025-10-19:41984

Failed tests (2/332)

Order  Failed test                            Duration (s)
91     TestFunctional/parallel/DashboardCmd   302.16
256    TestKubernetesUpgrade                   589.3
TestFunctional/parallel/DashboardCmd (302.16s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-761710 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-761710 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-761710 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-761710 --alsologtostderr -v=1] stderr:
I1019 16:29:23.311919   53008 out.go:360] Setting OutFile to fd 1 ...
I1019 16:29:23.312203   53008 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:23.312216   53008 out.go:374] Setting ErrFile to fd 2...
I1019 16:29:23.312222   53008 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:23.312462   53008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
I1019 16:29:23.312711   53008 mustload.go:66] Loading cluster: functional-761710
I1019 16:29:23.313041   53008 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:23.313447   53008 cli_runner.go:164] Run: docker container inspect functional-761710 --format={{.State.Status}}
I1019 16:29:23.332026   53008 host.go:66] Checking if "functional-761710" exists ...
I1019 16:29:23.332388   53008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1019 16:29:23.391563   53008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-19 16:29:23.381679912 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1019 16:29:23.391680   53008 api_server.go:166] Checking apiserver status ...
I1019 16:29:23.391722   53008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1019 16:29:23.391762   53008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-761710
I1019 16:29:23.410216   53008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/functional-761710/id_rsa Username:docker}
I1019 16:29:23.516188   53008 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5104/cgroup
W1019 16:29:23.526030   53008 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5104/cgroup: Process exited with status 1
stdout:

stderr:
I1019 16:29:23.526089   53008 ssh_runner.go:195] Run: ls
I1019 16:29:23.530237   53008 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1019 16:29:23.535352   53008 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1019 16:29:23.535400   53008 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1019 16:29:23.535581   53008 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:23.535595   53008 addons.go:70] Setting dashboard=true in profile "functional-761710"
I1019 16:29:23.535603   53008 addons.go:239] Setting addon dashboard=true in "functional-761710"
I1019 16:29:23.535635   53008 host.go:66] Checking if "functional-761710" exists ...
I1019 16:29:23.536189   53008 cli_runner.go:164] Run: docker container inspect functional-761710 --format={{.State.Status}}
I1019 16:29:23.558162   53008 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1019 16:29:23.559503   53008 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1019 16:29:23.560814   53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1019 16:29:23.560835   53008 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1019 16:29:23.560910   53008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-761710
I1019 16:29:23.583234   53008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/functional-761710/id_rsa Username:docker}
I1019 16:29:23.688166   53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1019 16:29:23.688194   53008 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1019 16:29:23.701768   53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1019 16:29:23.701794   53008 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1019 16:29:23.714692   53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1019 16:29:23.714720   53008 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1019 16:29:23.728751   53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1019 16:29:23.728773   53008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1019 16:29:23.742180   53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1019 16:29:23.742221   53008 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1019 16:29:23.755321   53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1019 16:29:23.755354   53008 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1019 16:29:23.767832   53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1019 16:29:23.767854   53008 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1019 16:29:23.780495   53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1019 16:29:23.780521   53008 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1019 16:29:23.793002   53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1019 16:29:23.793023   53008 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1019 16:29:23.806408   53008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1019 16:29:24.275772   53008 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-761710 addons enable metrics-server

I1019 16:29:24.277180   53008 addons.go:202] Writing out "functional-761710" config to set dashboard=true...
W1019 16:29:24.277385   53008 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1019 16:29:24.278078   53008 kapi.go:59] client config for functional-761710: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.key", CAFile:"/home/jenkins/minikube-integration/21683-3708/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1019 16:29:24.278601   53008 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1019 16:29:24.278623   53008 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1019 16:29:24.278629   53008 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1019 16:29:24.278635   53008 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1019 16:29:24.278644   53008 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1019 16:29:24.286985   53008 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  b92c07ce-030a-44b1-8a04-35efeab7c5ec 738 0 2025-10-19 16:29:24 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-19 16:29:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.105.88.97,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.105.88.97],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1019 16:29:24.287159   53008 out.go:285] * Launching proxy ...
* Launching proxy ...
I1019 16:29:24.287224   53008 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-761710 proxy --port 36195]
I1019 16:29:24.287515   53008 dashboard.go:159] Waiting for kubectl to output host:port ...
I1019 16:29:24.337241   53008 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1019 16:29:24.337320   53008 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1019 16:29:24.346822   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[71d05a66-2e37-4bb7-b8dd-1d4abfb1d156] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7a980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206280 TLS:<nil>}
I1019 16:29:24.346903   53008 retry.go:31] will retry after 124.601µs: Temporary Error: unexpected response code: 503
I1019 16:29:24.350558   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d455a3a-fc20-4064-8b73-519b6e29cde5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7aa40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002063c0 TLS:<nil>}
I1019 16:29:24.350615   53008 retry.go:31] will retry after 216.393µs: Temporary Error: unexpected response code: 503
I1019 16:29:24.356277   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4d1c3f41-e43e-48c7-b209-cd738ade4d50] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000ce9c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206640 TLS:<nil>}
I1019 16:29:24.356321   53008 retry.go:31] will retry after 301.953µs: Temporary Error: unexpected response code: 503
I1019 16:29:24.359708   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7ed7fdc7-8e6f-49c4-b561-1d9809e281ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000ce9d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317180 TLS:<nil>}
I1019 16:29:24.359763   53008 retry.go:31] will retry after 357.191µs: Temporary Error: unexpected response code: 503
I1019 16:29:24.363243   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c7b80a7-433c-4b5e-9584-33a8ecaec207] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7ab40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003172c0 TLS:<nil>}
I1019 16:29:24.363299   53008 retry.go:31] will retry after 597.975µs: Temporary Error: unexpected response code: 503
I1019 16:29:24.366911   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[41ab07bf-2eca-4260-ae84-7bccc89c444a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000ce9e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206780 TLS:<nil>}
I1019 16:29:24.366957   53008 retry.go:31] will retry after 1.039229ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.370288   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[59bb9d39-67ee-4e8a-a3fc-c12a4d61f5c0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7ac40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317400 TLS:<nil>}
I1019 16:29:24.370374   53008 retry.go:31] will retry after 1.110691ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.375233   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[acace3e2-5b11-497b-8943-3457112dadd6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc0015fa180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002068c0 TLS:<nil>}
I1019 16:29:24.375282   53008 retry.go:31] will retry after 1.629066ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.379590   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[752c84f4-adca-4c8c-b088-379ff1e45469] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7ad00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc000 TLS:<nil>}
I1019 16:29:24.379644   53008 retry.go:31] will retry after 1.968417ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.384412   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0be8d5e9-2a60-4c78-8c98-b4ef93e0fed4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc0015fa2c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206dc0 TLS:<nil>}
I1019 16:29:24.384459   53008 retry.go:31] will retry after 3.21246ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.390812   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eea1d252-01b3-4884-b760-34ba6d14cde8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc00175e040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc140 TLS:<nil>}
I1019 16:29:24.390858   53008 retry.go:31] will retry after 3.698122ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.397477   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[24077880-0fd2-4fc3-9660-8c8de46c0b5f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc0015fa3c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317540 TLS:<nil>}
I1019 16:29:24.397526   53008 retry.go:31] will retry after 4.928639ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.404902   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[62b75097-b006-4967-bb08-e3585a104a75] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7ae00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc280 TLS:<nil>}
I1019 16:29:24.404953   53008 retry.go:31] will retry after 11.274983ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.419022   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6d7c8c8d-4160-40cb-af51-37a0f79dafff] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc0015fa4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206f00 TLS:<nil>}
I1019 16:29:24.419096   53008 retry.go:31] will retry after 28.550556ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.452621   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3e4dd4ae-fd6c-468b-a97b-78a7c75531a4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7af00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc3c0 TLS:<nil>}
I1019 16:29:24.452675   53008 retry.go:31] will retry after 38.345769ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.494668   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[09975259-8a72-40c2-9994-48a57c947fdd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc0015fa5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207040 TLS:<nil>}
I1019 16:29:24.494737   53008 retry.go:31] will retry after 53.318715ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.551710   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[493ab912-05bb-4326-955c-109a4c067f99] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7b000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc500 TLS:<nil>}
I1019 16:29:24.551811   53008 retry.go:31] will retry after 60.453558ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.616106   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[048fe5d4-23c3-45de-bdaf-10640fd8e2d6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc0015fa6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207180 TLS:<nil>}
I1019 16:29:24.616178   53008 retry.go:31] will retry after 86.53404ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.706040   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1736325f-6788-45c0-b976-cff92c0a55f8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7b100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc640 TLS:<nil>}
I1019 16:29:24.706123   53008 retry.go:31] will retry after 128.275809ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.837956   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[abfd279e-3bba-441a-adbe-bf3480fd4de0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc0015fa7c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002072c0 TLS:<nil>}
I1019 16:29:24.838029   53008 retry.go:31] will retry after 216.827935ms: Temporary Error: unexpected response code: 503
I1019 16:29:25.058316   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[222376b3-c476-4950-9727-dee579917bf1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:25 GMT]] Body:0xc00175e140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc780 TLS:<nil>}
I1019 16:29:25.058380   53008 retry.go:31] will retry after 328.502909ms: Temporary Error: unexpected response code: 503
I1019 16:29:25.390844   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dd5ff30f-4a58-4ef4-92c5-464a07c3c875] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:25 GMT]] Body:0xc000c7b200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317680 TLS:<nil>}
I1019 16:29:25.390899   53008 retry.go:31] will retry after 315.416144ms: Temporary Error: unexpected response code: 503
I1019 16:29:25.710501   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[29df82db-fe4a-45cc-8311-4aa0d387503a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:25 GMT]] Body:0xc0015fa900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207400 TLS:<nil>}
I1019 16:29:25.710570   53008 retry.go:31] will retry after 571.57826ms: Temporary Error: unexpected response code: 503
I1019 16:29:26.286204   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[73695916-e011-4610-bf9a-89abf3c95437] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:26 GMT]] Body:0xc00175e200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc8c0 TLS:<nil>}
I1019 16:29:26.286261   53008 retry.go:31] will retry after 1.546802408s: Temporary Error: unexpected response code: 503
I1019 16:29:27.836557   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[79af8d7f-f2a1-4c2b-983c-de13a19e66fe] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:27 GMT]] Body:0xc000c7b300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003177c0 TLS:<nil>}
I1019 16:29:27.836650   53008 retry.go:31] will retry after 2.194466349s: Temporary Error: unexpected response code: 503
I1019 16:29:30.034620   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f2ed78fb-ff56-43c0-856a-611054b0c8dd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:30 GMT]] Body:0xc00175e300 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207540 TLS:<nil>}
I1019 16:29:30.034674   53008 retry.go:31] will retry after 3.620479271s: Temporary Error: unexpected response code: 503
I1019 16:29:33.658946   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[de89b6f8-4876-4818-8c33-d677864dab11] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:33 GMT]] Body:0xc00175e3c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317900 TLS:<nil>}
I1019 16:29:33.659016   53008 retry.go:31] will retry after 3.791866852s: Temporary Error: unexpected response code: 503
I1019 16:29:37.456424   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[62432db4-17c0-4dbb-b112-2a5090d7df7f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:37 GMT]] Body:0xc000c7b4c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207680 TLS:<nil>}
I1019 16:29:37.456494   53008 retry.go:31] will retry after 8.456579226s: Temporary Error: unexpected response code: 503
I1019 16:29:45.917177   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b5848962-095e-4b4f-a53e-2f4624fd5041] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:45 GMT]] Body:0xc00175e440 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207900 TLS:<nil>}
I1019 16:29:45.917243   53008 retry.go:31] will retry after 4.606413788s: Temporary Error: unexpected response code: 503
I1019 16:29:50.527988   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a75d02fa-4e71-453e-8309-4f2c9f485e1a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:50 GMT]] Body:0xc0015faa00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317a40 TLS:<nil>}
I1019 16:29:50.528078   53008 retry.go:31] will retry after 18.018986578s: Temporary Error: unexpected response code: 503
I1019 16:30:08.550756   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[66505db9-b73d-4632-bfae-e7037909b1dc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:30:08 GMT]] Body:0xc00175e540 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fca00 TLS:<nil>}
I1019 16:30:08.550817   53008 retry.go:31] will retry after 11.141916924s: Temporary Error: unexpected response code: 503
I1019 16:30:19.696401   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3f3bcbe4-7b4e-4b5d-893d-f6341696a1c5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:30:19 GMT]] Body:0xc0015fab00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207cc0 TLS:<nil>}
I1019 16:30:19.696470   53008 retry.go:31] will retry after 35.35287487s: Temporary Error: unexpected response code: 503
I1019 16:30:55.052692   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0d16d20f-d7fb-4bb2-9c69-2b72e3b4a1fb] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:30:55 GMT]] Body:0xc000c7b640 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317b80 TLS:<nil>}
I1019 16:30:55.052766   53008 retry.go:31] will retry after 26.807980584s: Temporary Error: unexpected response code: 503
I1019 16:31:21.867231   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1a050042-1f2c-4fc7-8e73-adc0c676ac2f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:31:21 GMT]] Body:0xc0015fab80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00182e000 TLS:<nil>}
I1019 16:31:21.867324   53008 retry.go:31] will retry after 35.226434368s: Temporary Error: unexpected response code: 503
I1019 16:31:57.099119   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[69471167-6b04-4290-9adf-f6ead5c0f67c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:31:57 GMT]] Body:0xc00175e040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00182e140 TLS:<nil>}
I1019 16:31:57.099202   53008 retry.go:31] will retry after 58.535539436s: Temporary Error: unexpected response code: 503
I1019 16:32:55.638541   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[06ffcdab-4d26-44dc-87d1-75bfbe0acabd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:32:55 GMT]] Body:0xc00083a0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000718280 TLS:<nil>}
I1019 16:32:55.638619   53008 retry.go:31] will retry after 43.048478163s: Temporary Error: unexpected response code: 503
I1019 16:33:38.691388   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c8b0dbae-47e7-4d33-bd96-9be621cac567] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:33:38 GMT]] Body:0xc00083a0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007183c0 TLS:<nil>}
I1019 16:33:38.691519   53008 retry.go:31] will retry after 32.789020471s: Temporary Error: unexpected response code: 503
I1019 16:34:11.486102   53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4c71d796-bbd5-42b4-84fa-a3e85c903835] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:34:11 GMT]] Body:0xc000c7a180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000718a00 TLS:<nil>}
I1019 16:34:11.486183   53008 retry.go:31] will retry after 1m15.447738855s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-761710
helpers_test.go:243: (dbg) docker inspect functional-761710:

-- stdout --
	[
	    {
	        "Id": "2b71f09d1a45844908d31cf333a40b01e94292aff4b985dd75ace2895a23ae06",
	        "Created": "2025-10-19T16:27:26.539472688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40398,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T16:27:26.572261738Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/2b71f09d1a45844908d31cf333a40b01e94292aff4b985dd75ace2895a23ae06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b71f09d1a45844908d31cf333a40b01e94292aff4b985dd75ace2895a23ae06/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b71f09d1a45844908d31cf333a40b01e94292aff4b985dd75ace2895a23ae06/hosts",
	        "LogPath": "/var/lib/docker/containers/2b71f09d1a45844908d31cf333a40b01e94292aff4b985dd75ace2895a23ae06/2b71f09d1a45844908d31cf333a40b01e94292aff4b985dd75ace2895a23ae06-json.log",
	        "Name": "/functional-761710",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-761710:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-761710",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b71f09d1a45844908d31cf333a40b01e94292aff4b985dd75ace2895a23ae06",
	                "LowerDir": "/var/lib/docker/overlay2/b581ee1a1a38fd53e589742cc7f8edeb245f8ad3ab646d27d505915144d66825-init/diff:/var/lib/docker/overlay2/679788dc5d6c9ac02347cc41d6b5035c8cb9d202024310ee3487f11ae7ab51e7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b581ee1a1a38fd53e589742cc7f8edeb245f8ad3ab646d27d505915144d66825/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b581ee1a1a38fd53e589742cc7f8edeb245f8ad3ab646d27d505915144d66825/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b581ee1a1a38fd53e589742cc7f8edeb245f8ad3ab646d27d505915144d66825/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-761710",
	                "Source": "/var/lib/docker/volumes/functional-761710/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-761710",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-761710",
	                "name.minikube.sigs.k8s.io": "functional-761710",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a457919bd01d830b11d02904d6fe8de312217e4919369ee669c20e6baa2ba71b",
	            "SandboxKey": "/var/run/docker/netns/a457919bd01d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-761710": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:39:4f:f2:39:04",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2dcae6e32a0f28d53c1d5609c5e0a2ce3b8ab39e083e1023c0a46d3a121e7012",
	                    "EndpointID": "ea854562658c4c75f61db0f332c5de8cfb4e2ba638e8f3fd23b74a9fec2436e3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-761710",
	                        "2b71f09d1a45"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-761710 -n functional-761710
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-761710 logs -n 25: (1.227030033s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-761710 ssh findmnt -T /mount3                                                                                  │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ mount          │ -p functional-761710 --kill=true                                                                                          │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │                     │
	│ ssh            │ functional-761710 ssh sudo cat /etc/test/nested/copy/7254/hosts                                                           │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ cp             │ functional-761710 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                        │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh            │ functional-761710 ssh -n functional-761710 sudo cat /home/docker/cp-test.txt                                              │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ cp             │ functional-761710 cp functional-761710:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd124758359/001/cp-test.txt │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh            │ functional-761710 ssh -n functional-761710 sudo cat /home/docker/cp-test.txt                                              │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ cp             │ functional-761710 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh            │ functional-761710 ssh -n functional-761710 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh            │ functional-761710 ssh sudo cat /etc/ssl/certs/7254.pem                                                                    │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh            │ functional-761710 ssh sudo cat /usr/share/ca-certificates/7254.pem                                                        │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh            │ functional-761710 ssh sudo cat /etc/ssl/certs/51391683.0                                                                  │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh            │ functional-761710 ssh sudo cat /etc/ssl/certs/72542.pem                                                                   │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh            │ functional-761710 ssh sudo cat /usr/share/ca-certificates/72542.pem                                                       │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh            │ functional-761710 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                  │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-761710 image ls --format short --alsologtostderr                                                               │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-761710 image ls --format yaml --alsologtostderr                                                                │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ ssh            │ functional-761710 ssh pgrep buildkitd                                                                                     │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │                     │
	│ image          │ functional-761710 image ls --format json --alsologtostderr                                                                │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-761710 image ls --format table --alsologtostderr                                                               │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-761710 image build -t localhost/my-image:functional-761710 testdata/build --alsologtostderr                    │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ update-context │ functional-761710 update-context --alsologtostderr -v=2                                                                   │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ update-context │ functional-761710 update-context --alsologtostderr -v=2                                                                   │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ update-context │ functional-761710 update-context --alsologtostderr -v=2                                                                   │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	│ image          │ functional-761710 image ls                                                                                                │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:29:23
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:29:23.061173   52761 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:29:23.061415   52761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:23.061424   52761 out.go:374] Setting ErrFile to fd 2...
	I1019 16:29:23.061428   52761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:23.061662   52761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
	I1019 16:29:23.062133   52761 out.go:368] Setting JSON to false
	I1019 16:29:23.063238   52761 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":705,"bootTime":1760890658,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:29:23.063316   52761 start.go:143] virtualization: kvm guest
	I1019 16:29:23.065319   52761 out.go:179] * [functional-761710] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:29:23.066783   52761 notify.go:221] Checking for updates...
	I1019 16:29:23.066794   52761 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:29:23.068249   52761 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:29:23.069973   52761 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	I1019 16:29:23.071299   52761 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	I1019 16:29:23.072799   52761 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:29:23.074201   52761 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:29:23.076001   52761 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 16:29:23.076536   52761 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:29:23.104147   52761 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:29:23.104327   52761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:29:23.182350   52761 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-19 16:29:23.169590828 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:29:23.182539   52761 docker.go:319] overlay module found
	I1019 16:29:23.184086   52761 out.go:179] * Using the docker driver based on existing profile
	I1019 16:29:23.185550   52761 start.go:309] selected driver: docker
	I1019 16:29:23.185570   52761 start.go:930] validating driver "docker" against &{Name:functional-761710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-761710 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:29:23.185696   52761 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:29:23.185803   52761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:29:23.253848   52761 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-19 16:29:23.24165136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:29:23.254460   52761 cni.go:84] Creating CNI manager for ""
	I1019 16:29:23.254515   52761 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1019 16:29:23.254562   52761 start.go:353] cluster config:
	{Name:functional-761710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-761710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:29:23.256550   52761 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2f5a7af658f76       07ccdb7838758       4 minutes ago       Running             myfrontend                0                   5db0711b569bb       sp-pod                                      default
	f1a44b7013fc5       56cc512116c8f       4 minutes ago       Exited              mount-munger              0                   5903cc5f08550       busybox-mount                               default
	01377ea863bb5       5e7abcdd20216       5 minutes ago       Running             nginx                     0                   bd4ac4e508b8c       nginx-svc                                   default
	950283f0ddd2a       9056ab77afb8e       5 minutes ago       Running             echo-server               0                   08d09ecc604fc       hello-node-75c85bcc94-5nt4t                 default
	bbbd1ab2885a0       9056ab77afb8e       5 minutes ago       Running             echo-server               0                   d943eac4fe226       hello-node-connect-7d85dfc575-4w6jk         default
	08973f3b34332       c3994bc696102       5 minutes ago       Running             kube-apiserver            0                   5f3327a24a4b5       kube-apiserver-functional-761710            kube-system
	1462b09dbe799       7dd6aaa1717ab       5 minutes ago       Running             kube-scheduler            1                   1b8149a8c3599       kube-scheduler-functional-761710            kube-system
	4973497bd1c42       c80c8dbafe7dd       5 minutes ago       Running             kube-controller-manager   1                   79c59ac0302e5       kube-controller-manager-functional-761710   kube-system
	e85bee2c96edb       5f1f5298c888d       5 minutes ago       Running             etcd                      1                   00da85ec4a331       etcd-functional-761710                      kube-system
	c968c41169b99       6e38f40d628db       6 minutes ago       Running             storage-provisioner       1                   5760eea995196       storage-provisioner                         kube-system
	f34e350aeb779       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   a88f299a1e3f7       kindnet-l9dts                               kube-system
	7c27f34b9f2fd       fc25172553d79       6 minutes ago       Running             kube-proxy                1                   de9f1d19b6030       kube-proxy-ffw5j                            kube-system
	a4020f1f6d2fc       52546a367cc9e       6 minutes ago       Running             coredns                   1                   f708417819904       coredns-66bc5c9577-mcw9m                    kube-system
	db0ccb1d66c53       52546a367cc9e       6 minutes ago       Exited              coredns                   0                   f708417819904       coredns-66bc5c9577-mcw9m                    kube-system
	0a88ac387625b       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       0                   5760eea995196       storage-provisioner                         kube-system
	1cda6a7dc16b1       409467f978b4a       6 minutes ago       Exited              kindnet-cni               0                   a88f299a1e3f7       kindnet-l9dts                               kube-system
	7c91085af0ff8       fc25172553d79       6 minutes ago       Exited              kube-proxy                0                   de9f1d19b6030       kube-proxy-ffw5j                            kube-system
	1b3ed669750ce       5f1f5298c888d       6 minutes ago       Exited              etcd                      0                   00da85ec4a331       etcd-functional-761710                      kube-system
	c4a059002b214       c80c8dbafe7dd       6 minutes ago       Exited              kube-controller-manager   0                   79c59ac0302e5       kube-controller-manager-functional-761710   kube-system
	d7d06b0ece08f       7dd6aaa1717ab       6 minutes ago       Exited              kube-scheduler            0                   1b8149a8c3599       kube-scheduler-functional-761710            kube-system
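	No kubernetes-dashboard or dashboard-metrics-scraper container appears in the status listing above, even though the node description further down shows both pods scheduled about five minutes earlier. A minimal sketch (not part of the test suite; it assumes a kubeconfig for the functional-761710 cluster is reachable via the KUBECONFIG environment variable) of listing those pods with client-go and printing why their containers are still waiting:

	package main

	import (
		"context"
		"fmt"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the kubeconfig the CI job points at (assumption).
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// The namespace name comes from the pod listing in this report.
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			for _, st := range p.Status.ContainerStatuses {
				if st.State.Waiting != nil {
					// Prints the waiting reason/message for containers that never started.
					fmt.Printf("%s/%s: %s: %s\n", p.Name, st.Name, st.State.Waiting.Reason, st.State.Waiting.Message)
				}
			}
		}
	}

	With pulls throttled as shown in the containerd section below, the expected waiting reason for these pods would be ErrImagePull or ImagePullBackOff.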
	
	
	==> containerd <==
	Oct 19 16:31:01 functional-761710 containerd[3839]: time="2025-10-19T16:31:01.542421140Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 19 16:31:01 functional-761710 containerd[3839]: time="2025-10-19T16:31:01.544299691Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 19 16:31:02 functional-761710 containerd[3839]: time="2025-10-19T16:31:02.125829505Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 19 16:31:03 functional-761710 containerd[3839]: time="2025-10-19T16:31:03.782751402Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 19 16:31:03 functional-761710 containerd[3839]: time="2025-10-19T16:31:03.782845579Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Oct 19 16:31:03 functional-761710 containerd[3839]: time="2025-10-19T16:31:03.783574652Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Oct 19 16:31:03 functional-761710 containerd[3839]: time="2025-10-19T16:31:03.785061005Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 19 16:31:04 functional-761710 containerd[3839]: time="2025-10-19T16:31:04.375355509Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 19 16:31:06 functional-761710 containerd[3839]: time="2025-10-19T16:31:06.019641333Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 19 16:31:06 functional-761710 containerd[3839]: time="2025-10-19T16:31:06.019679842Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Oct 19 16:32:25 functional-761710 containerd[3839]: time="2025-10-19T16:32:25.540993965Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 19 16:32:25 functional-761710 containerd[3839]: time="2025-10-19T16:32:25.542798845Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 19 16:32:26 functional-761710 containerd[3839]: time="2025-10-19T16:32:26.134805521Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 19 16:32:28 functional-761710 containerd[3839]: time="2025-10-19T16:32:28.137459029Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 19 16:32:28 functional-761710 containerd[3839]: time="2025-10-19T16:32:28.137551428Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=12709"
	Oct 19 16:32:31 functional-761710 containerd[3839]: time="2025-10-19T16:32:31.541373580Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Oct 19 16:32:31 functional-761710 containerd[3839]: time="2025-10-19T16:32:31.542952328Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 19 16:32:32 functional-761710 containerd[3839]: time="2025-10-19T16:32:32.149442280Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 19 16:32:33 functional-761710 containerd[3839]: time="2025-10-19T16:32:33.797489879Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 19 16:32:33 functional-761710 containerd[3839]: time="2025-10-19T16:32:33.797546775Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Oct 19 16:32:33 functional-761710 containerd[3839]: time="2025-10-19T16:32:33.798397856Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 19 16:32:33 functional-761710 containerd[3839]: time="2025-10-19T16:32:33.800009481Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 19 16:32:34 functional-761710 containerd[3839]: time="2025-10-19T16:32:34.386132541Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 19 16:32:36 functional-761710 containerd[3839]: time="2025-10-19T16:32:36.031371027Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 19 16:32:36 functional-761710 containerd[3839]: time="2025-10-19T16:32:36.031456486Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
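	Every dashboard, metrics-scraper and mysql pull above ends in HTTP 429 ("toomanyrequests") from registry-1.docker.io, which would explain why the dashboard pods never started and the test saw no URL. A minimal sketch, assuming Docker Hub's documented anonymous token flow and the ratelimitpreview/test repository, for reading the remaining anonymous pull quota from the CI host:

	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	func main() {
		// 1. Fetch an anonymous pull token for Docker Hub's rate-limit preview repository.
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			panic(err)
		}

		// 2. HEAD the manifest; the current quota is reported in the response headers.
		req, _ := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		res, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer res.Body.Close()

		// e.g. "100;w=21600" means 100 pulls per 6 hours for anonymous clients.
		fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
		fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
		fmt.Println("status:", res.Status)
	}

	A zero or missing ratelimit-remaining value from this check would line up with the 429 responses containerd logged while pulling the dashboard images.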
	
	
	==> coredns [a4020f1f6d2fca615331c9843c7c8dc741b34fca8cbb3cce01f2ccad93bb295a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54651 - 6144 "HINFO IN 3449147369874917196.2101408998229899954. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.156774922s
	
	
	==> coredns [db0ccb1d66c53de96180d5bd61b1a535ac4175f9b223e1d6d2c252489fd79e1a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39851 - 7886 "HINFO IN 1345558034441184947.6162443656980074187. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022754018s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-761710
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-761710
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=functional-761710
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T16_27_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 16:27:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-761710
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 16:34:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 16:33:28 +0000   Sun, 19 Oct 2025 16:27:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 16:33:28 +0000   Sun, 19 Oct 2025 16:27:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 16:33:28 +0000   Sun, 19 Oct 2025 16:27:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 16:33:28 +0000   Sun, 19 Oct 2025 16:27:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-761710
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                5ae68d56-98af-4144-b34d-e9f4fe2ba653
	  Boot ID:                    6b9d3a6f-b4ab-4fcc-81f2-3c26fae1271b
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-5nt4t                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  default                     hello-node-connect-7d85dfc575-4w6jk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  default                     mysql-5bb876957f-f6zqq                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     4m56s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 coredns-66bc5c9577-mcw9m                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m38s
	  kube-system                 etcd-functional-761710                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m44s
	  kube-system                 kindnet-l9dts                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m39s
	  kube-system                 kube-apiserver-functional-761710              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-controller-manager-functional-761710     200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 kube-proxy-ffw5j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-scheduler-functional-761710              100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-5tf7v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7vhtt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m37s                  kube-proxy       
	  Normal  Starting                 6m                     kube-proxy       
	  Normal  NodeAllocatableEnforced  6m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m44s                  kubelet          Node functional-761710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m44s                  kubelet          Node functional-761710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m44s                  kubelet          Node functional-761710 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m44s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m40s                  node-controller  Node functional-761710 event: Registered Node functional-761710 in Controller
	  Normal  NodeReady                6m27s                  kubelet          Node functional-761710 status is now: NodeReady
	  Normal  Starting                 5m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m43s (x8 over 5m43s)  kubelet          Node functional-761710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m43s (x8 over 5m43s)  kubelet          Node functional-761710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m43s (x7 over 5m43s)  kubelet          Node functional-761710 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m38s                  node-controller  Node functional-761710 event: Registered Node functional-761710 in Controller
	
	
	==> dmesg <==
	[Oct19 16:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000998] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002002] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395420] i8042: Warning: Keylock active
	[  +0.009777] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004107] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000671] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000664] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000749] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000673] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000734] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.486149] block sda: the capability attribute has been deprecated.
	[  +0.085903] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022480] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.261018] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [1b3ed669750ce399f206e2f340f450d87681263adb0c18cb6b8771fbe062e569] <==
	{"level":"warn","ts":"2025-10-19T16:27:37.400537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:27:37.406675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:27:37.413453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:27:37.428017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:27:37.433969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:27:37.439718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:27:37.482868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51828","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T16:28:22.817527Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T16:28:22.817593Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-761710","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-19T16:28:22.817686Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T16:28:29.819533Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T16:28:29.819634Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:29.819690Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-19T16:28:29.819710Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:28:29.819733Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:28:29.819731Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T16:28:29.819745Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T16:28:29.819743Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:29.819745Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-19T16:28:29.819766Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-19T16:28:29.819754Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:29.821877Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-19T16:28:29.821934Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T16:28:29.821969Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-19T16:28:29.821983Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-761710","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [e85bee2c96edbced28d97ebccf45eba0479dc4e989d2c13c4223e8cff0e1383c] <==
	{"level":"warn","ts":"2025-10-19T16:28:42.971392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:42.978095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:42.989711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:42.995697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.003066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.009203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.015096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.022319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.028492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.035353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.042513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.049036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.063812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.070962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.077216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.084149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.090676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.097239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.103665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.110695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.116652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.127526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.133648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.139968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T16:28:43.188849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39190","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 16:34:24 up 16 min,  0 user,  load average: 0.18, 0.40, 0.42
	Linux functional-761710 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1cda6a7dc16b1ab1780421ca338150ff0caab7ffbf29c2e3eac63886eaea63b5] <==
	I1019 16:27:46.938998       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 16:27:47.023001       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1019 16:27:47.023197       1 main.go:148] setting mtu 1500 for CNI 
	I1019 16:27:47.023213       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 16:27:47.023235       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T16:27:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 16:27:47.234099       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 16:27:47.234191       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 16:27:47.234214       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 16:27:47.234493       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 16:27:47.534453       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 16:27:47.534483       1 metrics.go:72] Registering metrics
	I1019 16:27:47.534521       1 controller.go:711] "Syncing nftables rules"
	I1019 16:27:57.226737       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:27:57.226826       1 main.go:301] handling current node
	I1019 16:28:07.232918       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:28:07.232952       1 main.go:301] handling current node
	I1019 16:28:17.229134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:28:17.229185       1 main.go:301] handling current node
	
	
	==> kindnet [f34e350aeb77902eca5e271db9df5c60cd64a19bb27366157050e71391704a75] <==
	I1019 16:32:24.070936       1 main.go:301] handling current node
	I1019 16:32:34.077024       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:32:34.077083       1 main.go:301] handling current node
	I1019 16:32:44.070140       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:32:44.070190       1 main.go:301] handling current node
	I1019 16:32:54.071489       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:32:54.071527       1 main.go:301] handling current node
	I1019 16:33:04.069666       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:33:04.069707       1 main.go:301] handling current node
	I1019 16:33:14.070193       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:33:14.070231       1 main.go:301] handling current node
	I1019 16:33:24.071902       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:33:24.071932       1 main.go:301] handling current node
	I1019 16:33:34.073651       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:33:34.073690       1 main.go:301] handling current node
	I1019 16:33:44.070113       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:33:44.070148       1 main.go:301] handling current node
	I1019 16:33:54.072337       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:33:54.072369       1 main.go:301] handling current node
	I1019 16:34:04.073108       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:34:04.073143       1 main.go:301] handling current node
	I1019 16:34:14.070179       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:34:14.070222       1 main.go:301] handling current node
	I1019 16:34:24.071177       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 16:34:24.071212       1 main.go:301] handling current node
	
	
	==> kube-apiserver [08973f3b34332566862f52063d35ff4b6a30b82b42d4a971953fe02a3bcf1241] <==
	I1019 16:28:43.633635       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 16:28:43.633642       1 cache.go:39] Caches are synced for autoregister controller
	I1019 16:28:43.637782       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 16:28:43.663575       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 16:28:44.535846       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 16:28:44.671230       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 16:28:44.671230       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1019 16:28:44.840440       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1019 16:28:44.841660       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 16:28:44.846194       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 16:28:45.393664       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 16:28:45.487184       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 16:28:45.534802       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 16:28:45.540187       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 16:28:47.927207       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 16:29:04.625735       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.69.252"}
	I1019 16:29:09.382602       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.29.20"}
	I1019 16:29:09.443972       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.192.131"}
	I1019 16:29:09.455438       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.224.244"}
	I1019 16:29:24.142868       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 16:29:24.255363       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.88.97"}
	I1019 16:29:24.268479       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.219.99"}
	E1019 16:29:26.569540       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39088: use of closed network connection
	I1019 16:29:28.306776       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.105.126.77"}
	E1019 16:29:36.968094       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49670: use of closed network connection
	
	
	==> kube-controller-manager [4973497bd1c429e784b628f248d777abe37538657f8c8e3fbcc1aa1f963be117] <==
	I1019 16:28:46.963225       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 16:28:46.963360       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 16:28:46.964377       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 16:28:46.964406       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 16:28:46.965145       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 16:28:46.965215       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 16:28:46.967650       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 16:28:46.969815       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 16:28:46.970164       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:28:46.971240       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 16:28:46.973484       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 16:28:46.974703       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 16:28:46.976896       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 16:28:46.979370       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 16:28:46.979449       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 16:28:46.979533       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-761710"
	I1019 16:28:46.979602       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 16:28:46.981556       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 16:28:46.988910       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1019 16:29:24.190944       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:24.195610       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:24.197978       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:24.199685       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:24.204090       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 16:29:24.207392       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [c4a059002b214bc47299729cff5112eae776a7586a794ec2aa1f122731cd2ccc] <==
	I1019 16:27:44.878349       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 16:27:44.878446       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 16:27:44.878473       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 16:27:44.878496       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 16:27:44.878548       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 16:27:44.878693       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 16:27:44.878743       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 16:27:44.878911       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 16:27:44.878923       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 16:27:44.879312       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 16:27:44.879344       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 16:27:44.879385       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 16:27:44.879393       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 16:27:44.882030       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 16:27:44.883652       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:27:44.883773       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:27:44.883807       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:27:44.883816       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 16:27:44.883823       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 16:27:44.888799       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 16:27:44.895191       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 16:27:44.898493       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 16:27:44.903639       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 16:27:44.912959       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 16:27:59.880398       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7c27f34b9f2fd5e95e689810fa8cf4a8aad2065d1b8cba2921e43bcecfd12157] <==
	I1019 16:28:23.733826       1 server_linux.go:53] "Using iptables proxy"
	I1019 16:28:23.795848       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 16:28:23.896002       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:28:23.896075       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:28:23.896151       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:28:23.917653       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:28:23.917700       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:28:23.923168       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:28:23.923898       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:28:23.923932       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:28:23.925726       1 config.go:200] "Starting service config controller"
	I1019 16:28:23.925747       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:28:23.925787       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:28:23.925797       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:28:23.925801       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:28:23.925808       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:28:23.925819       1 config.go:309] "Starting node config controller"
	I1019 16:28:23.925825       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:28:23.925831       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:28:24.026703       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 16:28:24.026831       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 16:28:24.026891       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [7c91085af0ff846a5f5570f5cd5939b894aec16fc914b191677051465657777a] <==
	I1019 16:27:46.486097       1 server_linux.go:53] "Using iptables proxy"
	I1019 16:27:46.544808       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 16:27:46.645849       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:27:46.645904       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 16:27:46.646005       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:27:46.669329       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 16:27:46.669389       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:27:46.675140       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:27:46.675732       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:27:46.675769       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:27:46.677658       1 config.go:200] "Starting service config controller"
	I1019 16:27:46.677676       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:27:46.677736       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:27:46.677749       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:27:46.677771       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:27:46.677778       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:27:46.678690       1 config.go:309] "Starting node config controller"
	I1019 16:27:46.678775       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:27:46.678802       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:27:46.777849       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 16:27:46.777894       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 16:27:46.777895       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1462b09dbe799076213d74cc43d920ba9338c4f6897c8e0f1cee85968c4316ca] <==
	I1019 16:28:42.304977       1 serving.go:386] Generated self-signed cert in-memory
	W1019 16:28:43.569376       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 16:28:43.569414       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1019 16:28:43.569428       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 16:28:43.569437       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 16:28:43.583920       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 16:28:43.583943       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:28:43.585791       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:43.585829       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:43.586154       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 16:28:43.586213       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 16:28:43.686218       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [d7d06b0ece08fd1d83ee5dcb0407dd361d8fe3062ee13eae257debf5ee09797a] <==
	E1019 16:27:37.889462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 16:27:37.889469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 16:27:37.889491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 16:27:37.889534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 16:27:37.889177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 16:27:37.889653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 16:27:37.889756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 16:27:37.889833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 16:27:38.706446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 16:27:38.713829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:27:38.748178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 16:27:38.857638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 16:27:38.859585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 16:27:38.889282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 16:27:38.939734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 16:27:39.104564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 16:27:39.121875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 16:27:39.152953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1019 16:27:41.886024       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:40.017075       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1019 16:28:40.017093       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 16:28:40.017220       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1019 16:28:40.017248       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1019 16:28:40.017281       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1019 16:28:40.017307       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 19 16:32:41 functional-761710 kubelet[4897]: E1019 16:32:41.544395    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
	Oct 19 16:32:47 functional-761710 kubelet[4897]: E1019 16:32:47.543835    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
	Oct 19 16:32:48 functional-761710 kubelet[4897]: E1019 16:32:48.540303    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
	Oct 19 16:32:54 functional-761710 kubelet[4897]: E1019 16:32:54.540713    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
	Oct 19 16:32:58 functional-761710 kubelet[4897]: E1019 16:32:58.540742    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
	Oct 19 16:33:02 functional-761710 kubelet[4897]: E1019 16:33:02.540603    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
	Oct 19 16:33:05 functional-761710 kubelet[4897]: E1019 16:33:05.541116    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
	Oct 19 16:33:12 functional-761710 kubelet[4897]: E1019 16:33:12.541133    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
	Oct 19 16:33:17 functional-761710 kubelet[4897]: E1019 16:33:17.540946    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
	Oct 19 16:33:20 functional-761710 kubelet[4897]: E1019 16:33:20.540531    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
	Oct 19 16:33:26 functional-761710 kubelet[4897]: E1019 16:33:26.540821    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
	Oct 19 16:33:29 functional-761710 kubelet[4897]: E1019 16:33:29.541010    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
	Oct 19 16:33:32 functional-761710 kubelet[4897]: E1019 16:33:32.540659    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
	Oct 19 16:33:41 functional-761710 kubelet[4897]: E1019 16:33:41.543948    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
	Oct 19 16:33:41 functional-761710 kubelet[4897]: E1019 16:33:41.543946    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
	Oct 19 16:33:46 functional-761710 kubelet[4897]: E1019 16:33:46.541008    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
	Oct 19 16:33:52 functional-761710 kubelet[4897]: E1019 16:33:52.540632    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
	Oct 19 16:33:56 functional-761710 kubelet[4897]: E1019 16:33:56.541101    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
	Oct 19 16:33:58 functional-761710 kubelet[4897]: E1019 16:33:58.540930    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
	Oct 19 16:34:03 functional-761710 kubelet[4897]: E1019 16:34:03.540601    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
	Oct 19 16:34:09 functional-761710 kubelet[4897]: E1019 16:34:09.546279    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
	Oct 19 16:34:12 functional-761710 kubelet[4897]: E1019 16:34:12.541226    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
	Oct 19 16:34:18 functional-761710 kubelet[4897]: E1019 16:34:18.544083    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
	Oct 19 16:34:23 functional-761710 kubelet[4897]: E1019 16:34:23.540344    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
	Oct 19 16:34:24 functional-761710 kubelet[4897]: E1019 16:34:24.540546    4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
	
	
	==> storage-provisioner [0a88ac387625bcc8f641d8fcfea039499913787ee17be78376e83cef579b85f3] <==
	I1019 16:27:57.899436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-761710_683cce3d-761d-4c11-94e7-16840eda5b37!
	W1019 16:27:59.807950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:59.811782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:01.814709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:01.818903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:03.822526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:03.827155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:05.830416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:05.836262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:07.840278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:07.844784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:09.847502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:09.853003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:11.856149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:11.861414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:13.864430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:13.870723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:15.874276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:15.878004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:17.881509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:17.885238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:19.888227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:19.892081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:21.895540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:28:21.899481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c968c41169b995e8f0e0af55ce448bee3ba2bf9ced4d72654dbc8b7c1eaf0ba4] <==
	W1019 16:33:59.044796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:01.047612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:01.052754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:03.055655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:03.059903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:05.063149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:05.068214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:07.071550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:07.075874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:09.078836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:09.083376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:11.086076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:11.091023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:13.093955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:13.098037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:15.101568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:15.107126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:17.110194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:17.114350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:19.117460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:19.121763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:21.125294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:21.129071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:23.132727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:34:23.137561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
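
The kubelet log and pod events above share one root cause: every anonymous pull from registry-1.docker.io (kubernetesui/dashboard:v2.7.0, kubernetesui/metrics-scraper:v1.0.8, mysql:5.7) was rejected with 429 Too Many Requests, leaving kubernetes-dashboard-855c9754f9-7vhtt, dashboard-metrics-scraper-77bf4d6c4c-5tf7v and mysql-5bb876957f-f6zqq stuck in ImagePullBackOff. A minimal workaround sketch, not part of the recorded run and assuming the images can first be fetched on the build host (for example after an authenticated docker login, which lifts the anonymous rate limit), would be to side-load them into the profile so the kubelet never pulls from Docker Hub:

	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	docker pull docker.io/kubernetesui/metrics-scraper:v1.0.8
	out/minikube-linux-amd64 -p functional-761710 image load docker.io/kubernetesui/dashboard:v2.7.0
	out/minikube-linux-amd64 -p functional-761710 image load docker.io/kubernetesui/metrics-scraper:v1.0.8

Because the dashboard manifests reference these images by digest, side-loading only helps if the loaded copies resolve to the same digests; otherwise authenticating the Docker daemon or configuring a registry mirror is the more reliable route.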
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-761710 -n functional-761710
helpers_test.go:269: (dbg) Run:  kubectl --context functional-761710 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-f6zqq dashboard-metrics-scraper-77bf4d6c4c-5tf7v kubernetes-dashboard-855c9754f9-7vhtt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-761710 describe pod busybox-mount mysql-5bb876957f-f6zqq dashboard-metrics-scraper-77bf4d6c4c-5tf7v kubernetes-dashboard-855c9754f9-7vhtt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-761710 describe pod busybox-mount mysql-5bb876957f-f6zqq dashboard-metrics-scraper-77bf4d6c4c-5tf7v kubernetes-dashboard-855c9754f9-7vhtt: exit status 1 (74.431499ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-761710/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:23 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://f1a44b7013fc522b10faa629c43c6fad14dc3052a079440850320321c563ef05
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 19 Oct 2025 16:29:26 +0000
	      Finished:     Sun, 19 Oct 2025 16:29:26 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8wtmj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-8wtmj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m1s   default-scheduler  Successfully assigned default/busybox-mount to functional-761710
	  Normal  Pulling    5m2s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m59s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.135s (2.135s including waiting). Image size: 2395207 bytes.
	  Normal  Created    4m59s  kubelet            Created container: mount-munger
	  Normal  Started    4m59s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-f6zqq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-761710/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 16:29:28 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5fdst (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5fdst:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m57s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-f6zqq to functional-761710
	  Normal   Pulling    114s (x5 over 4m57s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     112s (x5 over 4m52s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     112s (x5 over 4m52s)  kubelet            Error: ErrImagePull
	  Warning  Failed     33s (x15 over 4m51s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    7s (x17 over 4m51s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-5tf7v" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7vhtt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-761710 describe pod busybox-mount mysql-5bb876957f-f6zqq dashboard-metrics-scraper-77bf4d6c4c-5tf7v kubernetes-dashboard-855c9754f9-7vhtt: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.16s)
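Note on the failure: the dashboard process never emitted a URL within the test timeout, and the post-mortem describe above is only partially useful, most likely because the dashboard pods are deployed to a non-default namespace (kubernetes-dashboard in the stock addon), so a describe against the default namespace reports them as NotFound; the mysql pod's ImagePullBackOff is a separate symptom of the Docker Hub 429 unauthenticated pull rate limit visible in its events. A minimal manual check, assuming the stock minikube dashboard addon layout and this profile, might be:

    # hypothetical follow-up commands, not part of the test run
    kubectl --context functional-761710 -n kubernetes-dashboard get pods -o wide
    kubectl --context functional-761710 -n kubernetes-dashboard get events --sort-by=.lastTimestamp
    out/minikube-linux-amd64 dashboard --url -p functional-761710 --alsologtostderr -v=1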

                                                
                                    
TestKubernetesUpgrade (589.3s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-769998 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-769998 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (22.845283053s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-769998
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-769998: (1.864238193s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-769998 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-769998 status --format={{.Host}}: exit status 7 (69.720378ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-769998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-769998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.192082111s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-769998 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-769998 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-769998 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (71.762088ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-769998] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-769998
	    minikube start -p kubernetes-upgrade-769998 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7699982 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-769998 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
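Note: exit status 106 is the expected K8S_DOWNGRADE_UNSUPPORTED guard; in-place Kubernetes downgrades are refused by design, which is exactly what this step asserts. If a v1.28.0 cluster were actually wanted from this state, the supported route is the first suggestion printed above, roughly:

    # hypothetical recreation of the profile at the lower version
    minikube delete -p kubernetes-upgrade-769998
    minikube start -p kubernetes-upgrade-769998 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd

(the --driver and --container-runtime flags simply mirror the ones used elsewhere in this run; the printed suggestion omits them)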
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-769998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-769998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 80 (7m24.083885035s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-769998] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-769998" primary control-plane node in "kubernetes-upgrade-769998" cluster
	* Pulling base image v0.0.48-1760609789-21757 ...
	* Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:57:38.452827  231857 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:57:38.453149  231857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:57:38.453159  231857 out.go:374] Setting ErrFile to fd 2...
	I1019 16:57:38.453163  231857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:57:38.453335  231857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
	I1019 16:57:38.453780  231857 out.go:368] Setting JSON to false
	I1019 16:57:38.455011  231857 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2400,"bootTime":1760890658,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:57:38.455127  231857 start.go:143] virtualization: kvm guest
	I1019 16:57:38.456720  231857 out.go:179] * [kubernetes-upgrade-769998] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:57:38.457820  231857 notify.go:221] Checking for updates...
	I1019 16:57:38.457860  231857 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:57:38.459076  231857 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:57:38.460569  231857 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	I1019 16:57:38.461806  231857 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	I1019 16:57:38.462999  231857 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:57:38.464118  231857 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:57:38.465758  231857 config.go:182] Loaded profile config "kubernetes-upgrade-769998": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 16:57:38.466361  231857 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:57:38.494985  231857 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:57:38.495159  231857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:57:38.567260  231857 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-19 16:57:38.557268462 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:57:38.567376  231857 docker.go:319] overlay module found
	I1019 16:57:38.569196  231857 out.go:179] * Using the docker driver based on existing profile
	I1019 16:57:38.570541  231857 start.go:309] selected driver: docker
	I1019 16:57:38.570569  231857 start.go:930] validating driver "docker" against &{Name:kubernetes-upgrade-769998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-769998 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:57:38.570675  231857 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:57:38.571450  231857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:57:38.657924  231857 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-19 16:57:38.645640186 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:57:38.658288  231857 cni.go:84] Creating CNI manager for ""
	I1019 16:57:38.658361  231857 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1019 16:57:38.658424  231857 start.go:353] cluster config:
	{Name:kubernetes-upgrade-769998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-769998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:57:38.660100  231857 out.go:179] * Starting "kubernetes-upgrade-769998" primary control-plane node in "kubernetes-upgrade-769998" cluster
	I1019 16:57:38.664653  231857 cache.go:124] Beginning downloading kic base image for docker with containerd
	I1019 16:57:38.666030  231857 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 16:57:38.666977  231857 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1019 16:57:38.667025  231857 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1019 16:57:38.667116  231857 cache.go:59] Caching tarball of preloaded images
	I1019 16:57:38.667029  231857 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 16:57:38.667230  231857 preload.go:233] Found /home/jenkins/minikube-integration/21683-3708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1019 16:57:38.667246  231857 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1019 16:57:38.667374  231857 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kubernetes-upgrade-769998/config.json ...
	I1019 16:57:38.697273  231857 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 16:57:38.697296  231857 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 16:57:38.697324  231857 cache.go:233] Successfully downloaded all kic artifacts
	I1019 16:57:38.697347  231857 start.go:360] acquireMachinesLock for kubernetes-upgrade-769998: {Name:mk4f3129295eefcb91af911856874563ac2067d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 16:57:38.697397  231857 start.go:364] duration metric: took 32.776µs to acquireMachinesLock for "kubernetes-upgrade-769998"
	I1019 16:57:38.697414  231857 start.go:96] Skipping create...Using existing machine configuration
	I1019 16:57:38.697421  231857 fix.go:54] fixHost starting: 
	I1019 16:57:38.697614  231857 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-769998 --format={{.State.Status}}
	I1019 16:57:38.718775  231857 fix.go:112] recreateIfNeeded on kubernetes-upgrade-769998: state=Running err=<nil>
	W1019 16:57:38.718806  231857 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 16:57:38.720686  231857 out.go:252] * Updating the running docker "kubernetes-upgrade-769998" container ...
	I1019 16:57:38.720725  231857 machine.go:94] provisionDockerMachine start ...
	I1019 16:57:38.720800  231857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-769998
	I1019 16:57:38.744070  231857 main.go:143] libmachine: Using SSH client type: native
	I1019 16:57:38.744334  231857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1019 16:57:38.744351  231857 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 16:57:38.882037  231857 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-769998
	
	I1019 16:57:38.882097  231857 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-769998"
	I1019 16:57:38.882169  231857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-769998
	I1019 16:57:38.901595  231857 main.go:143] libmachine: Using SSH client type: native
	I1019 16:57:38.901903  231857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1019 16:57:38.901927  231857 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-769998 && echo "kubernetes-upgrade-769998" | sudo tee /etc/hostname
	I1019 16:57:39.049359  231857 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-769998
	
	I1019 16:57:39.049434  231857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-769998
	I1019 16:57:39.072164  231857 main.go:143] libmachine: Using SSH client type: native
	I1019 16:57:39.072406  231857 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1019 16:57:39.072432  231857 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-769998' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-769998/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-769998' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 16:57:39.214881  231857 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 16:57:39.214930  231857 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3708/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3708/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3708/.minikube}
	I1019 16:57:39.214956  231857 ubuntu.go:190] setting up certificates
	I1019 16:57:39.214968  231857 provision.go:84] configureAuth start
	I1019 16:57:39.215026  231857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-769998
	I1019 16:57:39.235148  231857 provision.go:143] copyHostCerts
	I1019 16:57:39.235202  231857 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3708/.minikube/cert.pem, removing ...
	I1019 16:57:39.235213  231857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3708/.minikube/cert.pem
	I1019 16:57:39.235283  231857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3708/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3708/.minikube/cert.pem (1123 bytes)
	I1019 16:57:39.235422  231857 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3708/.minikube/key.pem, removing ...
	I1019 16:57:39.235438  231857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3708/.minikube/key.pem
	I1019 16:57:39.235487  231857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3708/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3708/.minikube/key.pem (1679 bytes)
	I1019 16:57:39.235587  231857 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3708/.minikube/ca.pem, removing ...
	I1019 16:57:39.235598  231857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3708/.minikube/ca.pem
	I1019 16:57:39.235636  231857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3708/.minikube/ca.pem (1082 bytes)
	I1019 16:57:39.235730  231857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3708/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-769998 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-769998 localhost minikube]
	I1019 16:57:39.302190  231857 provision.go:177] copyRemoteCerts
	I1019 16:57:39.302276  231857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 16:57:39.302324  231857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-769998
	I1019 16:57:39.322373  231857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/kubernetes-upgrade-769998/id_rsa Username:docker}
	I1019 16:57:39.427439  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 16:57:39.449041  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1019 16:57:39.470248  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 16:57:39.496224  231857 provision.go:87] duration metric: took 281.220701ms to configureAuth
	I1019 16:57:39.496260  231857 ubuntu.go:206] setting minikube options for container-runtime
	I1019 16:57:39.496500  231857 config.go:182] Loaded profile config "kubernetes-upgrade-769998": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 16:57:39.496520  231857 machine.go:97] duration metric: took 775.78581ms to provisionDockerMachine
	I1019 16:57:39.496531  231857 start.go:293] postStartSetup for "kubernetes-upgrade-769998" (driver="docker")
	I1019 16:57:39.496547  231857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 16:57:39.496607  231857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 16:57:39.496658  231857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-769998
	I1019 16:57:39.522386  231857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/kubernetes-upgrade-769998/id_rsa Username:docker}
	I1019 16:57:39.622468  231857 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 16:57:39.626878  231857 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 16:57:39.626908  231857 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 16:57:39.626922  231857 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3708/.minikube/addons for local assets ...
	I1019 16:57:39.626987  231857 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3708/.minikube/files for local assets ...
	I1019 16:57:39.627119  231857 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3708/.minikube/files/etc/ssl/certs/72542.pem -> 72542.pem in /etc/ssl/certs
	I1019 16:57:39.627244  231857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 16:57:39.636683  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/files/etc/ssl/certs/72542.pem --> /etc/ssl/certs/72542.pem (1708 bytes)
	I1019 16:57:39.657747  231857 start.go:296] duration metric: took 161.200022ms for postStartSetup
	I1019 16:57:39.657811  231857 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:57:39.657859  231857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-769998
	I1019 16:57:39.676656  231857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/kubernetes-upgrade-769998/id_rsa Username:docker}
	I1019 16:57:39.771317  231857 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 16:57:39.775982  231857 fix.go:56] duration metric: took 1.078553811s for fixHost
	I1019 16:57:39.776007  231857 start.go:83] releasing machines lock for "kubernetes-upgrade-769998", held for 1.078600942s
	I1019 16:57:39.776098  231857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-769998
	I1019 16:57:39.794552  231857 ssh_runner.go:195] Run: cat /version.json
	I1019 16:57:39.794580  231857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 16:57:39.794642  231857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-769998
	I1019 16:57:39.794643  231857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-769998
	I1019 16:57:39.814655  231857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/kubernetes-upgrade-769998/id_rsa Username:docker}
	I1019 16:57:39.814910  231857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/kubernetes-upgrade-769998/id_rsa Username:docker}
	I1019 16:57:39.976990  231857 ssh_runner.go:195] Run: systemctl --version
	I1019 16:57:39.984318  231857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 16:57:39.989623  231857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 16:57:39.989706  231857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 16:57:39.999749  231857 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 16:57:39.999793  231857 start.go:496] detecting cgroup driver to use...
	I1019 16:57:39.999830  231857 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 16:57:39.999892  231857 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1019 16:57:40.020908  231857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1019 16:57:40.038026  231857 docker.go:218] disabling cri-docker service (if available) ...
	I1019 16:57:40.038114  231857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 16:57:40.054820  231857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 16:57:40.069851  231857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 16:57:40.176771  231857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 16:57:40.271527  231857 docker.go:234] disabling docker service ...
	I1019 16:57:40.271593  231857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 16:57:40.286756  231857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 16:57:40.300174  231857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 16:57:40.394957  231857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 16:57:40.500136  231857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 16:57:40.514480  231857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 16:57:40.531145  231857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1019 16:57:40.541080  231857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1019 16:57:40.551943  231857 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1019 16:57:40.551998  231857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1019 16:57:40.561375  231857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1019 16:57:40.571544  231857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1019 16:57:40.581661  231857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1019 16:57:40.591457  231857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 16:57:40.600794  231857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1019 16:57:40.610818  231857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1019 16:57:40.620766  231857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1019 16:57:40.631206  231857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 16:57:40.638835  231857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 16:57:40.646965  231857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:57:40.739264  231857 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1019 16:57:40.847353  231857 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1019 16:57:40.847407  231857 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1019 16:57:40.851656  231857 start.go:564] Will wait 60s for crictl version
	I1019 16:57:40.851715  231857 ssh_runner.go:195] Run: which crictl
	I1019 16:57:40.855343  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 16:57:40.885942  231857 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1019 16:57:40.886013  231857 ssh_runner.go:195] Run: containerd --version
	I1019 16:57:40.913113  231857 ssh_runner.go:195] Run: containerd --version
	I1019 16:57:40.943724  231857 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1019 16:57:40.945097  231857 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-769998 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 16:57:40.962988  231857 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 16:57:40.967374  231857 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-769998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-769998 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 16:57:40.967463  231857 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1019 16:57:40.967539  231857 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 16:57:40.994494  231857 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-controller-manager:v1.34.1". assuming images are not preloaded.
	I1019 16:57:40.994553  231857 ssh_runner.go:195] Run: which lz4
	I1019 16:57:40.998751  231857 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1019 16:57:41.002367  231857 ssh_runner.go:356] copy: skipping /preloaded.tar.lz4 (exists)
	I1019 16:57:41.002383  231857 containerd.go:563] duration metric: took 3.667989ms to copy over tarball
	I1019 16:57:41.002430  231857 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1019 16:57:43.484203  231857 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.481739494s)
	I1019 16:57:43.484299  231857 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	I1019 16:57:43.484379  231857 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 16:57:43.512834  231857 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-controller-manager:v1.34.1". assuming images are not preloaded.
	I1019 16:57:43.512859  231857 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1019 16:57:43.512929  231857 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 16:57:43.512939  231857 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1019 16:57:43.512958  231857 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 16:57:43.512989  231857 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1019 16:57:43.512997  231857 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1019 16:57:43.513017  231857 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1019 16:57:43.513041  231857 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1019 16:57:43.513189  231857 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1019 16:57:43.514506  231857 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1019 16:57:43.514512  231857 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1019 16:57:43.514506  231857 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1019 16:57:43.514512  231857 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 16:57:43.514512  231857 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 16:57:43.514564  231857 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1019 16:57:43.514645  231857 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1019 16:57:43.514871  231857 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1019 16:57:43.673601  231857 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1019 16:57:43.673674  231857 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1019 16:57:43.689093  231857 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1019 16:57:43.689168  231857 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1019 16:57:43.703419  231857 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1019 16:57:43.703474  231857 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1019 16:57:43.704003  231857 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1019 16:57:43.704041  231857 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1019 16:57:43.704098  231857 ssh_runner.go:195] Run: which crictl
	I1019 16:57:43.708722  231857 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1019 16:57:43.708778  231857 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1019 16:57:43.709878  231857 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1019 16:57:43.709933  231857 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1019 16:57:43.712641  231857 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1019 16:57:43.712709  231857 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1019 16:57:43.713064  231857 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1019 16:57:43.713125  231857 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 16:57:43.724415  231857 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1019 16:57:43.724468  231857 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1019 16:57:43.724513  231857 ssh_runner.go:195] Run: which crictl
	I1019 16:57:43.739897  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1019 16:57:43.740481  231857 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1019 16:57:43.740528  231857 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1019 16:57:43.740576  231857 ssh_runner.go:195] Run: which crictl
	I1019 16:57:43.750086  231857 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1019 16:57:43.750130  231857 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1019 16:57:43.750174  231857 ssh_runner.go:195] Run: which crictl
	I1019 16:57:43.750828  231857 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1019 16:57:43.750905  231857 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1019 16:57:43.750959  231857 ssh_runner.go:195] Run: which crictl
	I1019 16:57:43.755566  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1019 16:57:43.755630  231857 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1019 16:57:43.755663  231857 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 16:57:43.755667  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1019 16:57:43.755693  231857 ssh_runner.go:195] Run: which crictl
	I1019 16:57:43.846247  231857 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3708/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1019 16:57:43.846319  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1019 16:57:43.846379  231857 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1019 16:57:43.846413  231857 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1019 16:57:43.846439  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1019 16:57:43.846447  231857 ssh_runner.go:195] Run: which crictl
	I1019 16:57:43.846481  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1019 16:57:43.846488  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 16:57:43.846511  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1019 16:57:43.880473  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1019 16:57:43.880501  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1019 16:57:43.880515  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1019 16:57:43.880579  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1019 16:57:43.880878  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 16:57:43.880904  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1019 16:57:43.925339  231857 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3708/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1019 16:57:43.925423  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1019 16:57:43.925459  231857 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3708/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1019 16:57:43.925423  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1019 16:57:43.925510  231857 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3708/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1019 16:57:43.925545  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 16:57:43.953785  231857 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3708/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1019 16:57:43.953814  231857 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3708/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1019 16:57:43.954402  231857 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3708/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1019 16:57:44.859181  231857 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1019 16:57:44.859266  231857 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 16:57:44.884080  231857 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1019 16:57:44.884126  231857 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 16:57:44.884172  231857 ssh_runner.go:195] Run: which crictl
	I1019 16:57:44.888388  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 16:57:44.918331  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 16:57:44.945480  231857 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 16:57:44.973066  231857 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21683-3708/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1019 16:57:44.973155  231857 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1019 16:57:44.977205  231857 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1019 16:57:44.977223  231857 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1019 16:57:44.977263  231857 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1019 16:57:45.179430  231857 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21683-3708/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1019 16:57:45.179489  231857 cache_images.go:94] duration metric: took 1.666615072s to LoadCachedImages
	W1019 16:57:45.179575  231857 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21683-3708/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21683-3708/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1: no such file or directory
	I1019 16:57:45.179591  231857 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1019 16:57:45.179699  231857 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-769998 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-769998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 16:57:45.179759  231857 ssh_runner.go:195] Run: sudo crictl info
	I1019 16:57:45.209299  231857 cni.go:84] Creating CNI manager for ""
	I1019 16:57:45.209320  231857 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1019 16:57:45.209338  231857 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 16:57:45.209362  231857 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-769998 NodeName:kubernetes-upgrade-769998 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 16:57:45.209494  231857 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-769998"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 16:57:45.209555  231857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 16:57:45.217968  231857 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 16:57:45.218032  231857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 16:57:45.226512  231857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1019 16:57:45.240015  231857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 16:57:45.253642  231857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1019 16:57:45.266384  231857 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 16:57:45.270319  231857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:57:45.381942  231857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 16:57:45.398444  231857 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kubernetes-upgrade-769998 for IP: 192.168.85.2
	I1019 16:57:45.398468  231857 certs.go:195] generating shared ca certs ...
	I1019 16:57:45.398488  231857 certs.go:227] acquiring lock for ca certs: {Name:mk932af9ac8cc3d6d1721cb604812f97e16ba01d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:57:45.398675  231857 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-3708/.minikube/ca.key
	I1019 16:57:45.398735  231857 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-3708/.minikube/proxy-client-ca.key
	I1019 16:57:45.398748  231857 certs.go:257] generating profile certs ...
	I1019 16:57:45.398850  231857 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kubernetes-upgrade-769998/client.key
	I1019 16:57:45.398911  231857 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kubernetes-upgrade-769998/apiserver.key.bbb1e6a4
	I1019 16:57:45.398975  231857 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kubernetes-upgrade-769998/proxy-client.key
	I1019 16:57:45.399150  231857 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3708/.minikube/certs/7254.pem (1338 bytes)
	W1019 16:57:45.399191  231857 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-3708/.minikube/certs/7254_empty.pem, impossibly tiny 0 bytes
	I1019 16:57:45.399206  231857 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 16:57:45.399248  231857 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca.pem (1082 bytes)
	I1019 16:57:45.399277  231857 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3708/.minikube/certs/cert.pem (1123 bytes)
	I1019 16:57:45.399308  231857 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3708/.minikube/certs/key.pem (1679 bytes)
	I1019 16:57:45.399373  231857 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-3708/.minikube/files/etc/ssl/certs/72542.pem (1708 bytes)
	I1019 16:57:45.400196  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 16:57:45.421325  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 16:57:45.441519  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 16:57:45.462811  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 16:57:45.483296  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kubernetes-upgrade-769998/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1019 16:57:45.505399  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kubernetes-upgrade-769998/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 16:57:45.526930  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kubernetes-upgrade-769998/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 16:57:45.548830  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kubernetes-upgrade-769998/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 16:57:45.569592  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/certs/7254.pem --> /usr/share/ca-certificates/7254.pem (1338 bytes)
	I1019 16:57:45.589424  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/files/etc/ssl/certs/72542.pem --> /usr/share/ca-certificates/72542.pem (1708 bytes)
	I1019 16:57:45.611283  231857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 16:57:45.631674  231857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 16:57:45.647002  231857 ssh_runner.go:195] Run: openssl version
	I1019 16:57:45.654259  231857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7254.pem && ln -fs /usr/share/ca-certificates/7254.pem /etc/ssl/certs/7254.pem"
	I1019 16:57:45.665162  231857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7254.pem
	I1019 16:57:45.669989  231857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:27 /usr/share/ca-certificates/7254.pem
	I1019 16:57:45.670061  231857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7254.pem
	I1019 16:57:45.714829  231857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7254.pem /etc/ssl/certs/51391683.0"
	I1019 16:57:45.724857  231857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/72542.pem && ln -fs /usr/share/ca-certificates/72542.pem /etc/ssl/certs/72542.pem"
	I1019 16:57:45.735345  231857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/72542.pem
	I1019 16:57:45.739591  231857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:27 /usr/share/ca-certificates/72542.pem
	I1019 16:57:45.739663  231857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/72542.pem
	I1019 16:57:45.784131  231857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/72542.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 16:57:45.793694  231857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 16:57:45.804407  231857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:57:45.808220  231857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:21 /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:57:45.808309  231857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:57:45.853733  231857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 16:57:45.863006  231857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 16:57:45.867465  231857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 16:57:45.916869  231857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 16:57:45.953194  231857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 16:57:45.991385  231857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 16:57:46.028083  231857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 16:57:46.068271  231857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1019 16:57:46.102441  231857 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-769998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-769998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:57:46.102544  231857 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1019 16:57:46.102599  231857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:57:46.140430  231857 cri.go:89] found id: ""
	I1019 16:57:46.140508  231857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 16:57:46.148964  231857 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 16:57:46.148985  231857 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 16:57:46.149034  231857 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 16:57:46.156781  231857 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 16:57:46.157464  231857 kubeconfig.go:125] found "kubernetes-upgrade-769998" server: "https://192.168.85.2:8443"
	I1019 16:57:46.158480  231857 kapi.go:59] client config for kubernetes-upgrade-769998: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kubernetes-upgrade-769998/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kubernetes-upgrade-769998/client.key", CAFile:"/home/jenkins/minikube-integration/21683-3708/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 16:57:46.158840  231857 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1019 16:57:46.158855  231857 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1019 16:57:46.158860  231857 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1019 16:57:46.158863  231857 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1019 16:57:46.158867  231857 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1019 16:57:46.159235  231857 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 16:57:46.167444  231857 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 16:57:46.167482  231857 kubeadm.go:602] duration metric: took 18.488523ms to restartPrimaryControlPlane
	I1019 16:57:46.167494  231857 kubeadm.go:403] duration metric: took 65.063145ms to StartCluster
	I1019 16:57:46.167510  231857 settings.go:142] acquiring lock: {Name:mk9f6d8488524ed11aac6e6756fd16e3b7b486fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:57:46.167570  231857 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3708/kubeconfig
	I1019 16:57:46.168593  231857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3708/kubeconfig: {Name:mk8f9aa104a9030ac2c43bdf10909ff66220bc19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:57:46.168832  231857 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1019 16:57:46.168905  231857 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 16:57:46.168995  231857 config.go:182] Loaded profile config "kubernetes-upgrade-769998": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 16:57:46.169008  231857 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-769998"
	I1019 16:57:46.169030  231857 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-769998"
	I1019 16:57:46.168999  231857 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-769998"
	I1019 16:57:46.169164  231857 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-769998"
	W1019 16:57:46.169174  231857 addons.go:248] addon storage-provisioner should already be in state true
	I1019 16:57:46.169210  231857 host.go:66] Checking if "kubernetes-upgrade-769998" exists ...
	I1019 16:57:46.169363  231857 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-769998 --format={{.State.Status}}
	I1019 16:57:46.169683  231857 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-769998 --format={{.State.Status}}
	I1019 16:57:46.173106  231857 out.go:179] * Verifying Kubernetes components...
	I1019 16:57:46.178264  231857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:57:46.190820  231857 kapi.go:59] client config for kubernetes-upgrade-769998: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kubernetes-upgrade-769998/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kubernetes-upgrade-769998/client.key", CAFile:"/home/jenkins/minikube-integration/21683-3708/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 16:57:46.191185  231857 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-769998"
	W1019 16:57:46.191208  231857 addons.go:248] addon default-storageclass should already be in state true
	I1019 16:57:46.191238  231857 host.go:66] Checking if "kubernetes-upgrade-769998" exists ...
	I1019 16:57:46.191756  231857 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-769998 --format={{.State.Status}}
	I1019 16:57:46.192103  231857 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 16:57:46.193596  231857 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 16:57:46.193617  231857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 16:57:46.193672  231857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-769998
	I1019 16:57:46.214072  231857 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 16:57:46.214098  231857 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 16:57:46.214159  231857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-769998
	I1019 16:57:46.224735  231857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/kubernetes-upgrade-769998/id_rsa Username:docker}
	I1019 16:57:46.240626  231857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/kubernetes-upgrade-769998/id_rsa Username:docker}
	I1019 16:57:46.318751  231857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 16:57:46.332940  231857 api_server.go:52] waiting for apiserver process to appear ...
	I1019 16:57:46.333014  231857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:57:46.344996  231857 api_server.go:72] duration metric: took 176.128276ms to wait for apiserver process to appear ...
	I1019 16:57:46.345025  231857 api_server.go:88] waiting for apiserver healthz status ...
	I1019 16:57:46.345074  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:57:46.359338  231857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 16:57:46.367676  231857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 16:57:48.349824  231857 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 16:57:48.349869  231857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 16:57:48.349896  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:57:50.354986  231857 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 16:57:50.355011  231857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 16:57:50.355025  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:57:52.360657  231857 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 16:57:52.360687  231857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 16:57:52.360706  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:57:54.366118  231857 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 16:57:54.366154  231857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 16:57:54.366186  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:57:56.371377  231857 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 16:57:56.371406  231857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 16:57:56.371424  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:57:58.377225  231857 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 16:57:58.377265  231857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 16:57:58.377284  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:58:00.387028  231857 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 16:58:00.387082  231857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 16:58:00.387099  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:58:02.394252  231857 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 16:58:02.394296  231857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 16:58:02.394331  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:58:04.401780  231857 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 16:58:04.401816  231857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 16:58:04.401831  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:58:06.407778  231857 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 16:58:06.407810  231857 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 16:58:06.407829  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:58:11.409394  231857 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1019 16:58:11.409445  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:58:16.410577  231857 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1019 16:58:16.410624  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:58:21.413277  231857 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1019 16:58:21.413321  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:58:26.414640  231857 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1019 16:58:26.414683  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:58:31.418324  231857 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1019 16:58:31.418376  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:58:36.419727  231857 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1019 16:58:36.419776  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:58:41.422291  231857 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1019 16:58:41.422331  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 16:58:46.423762  231857 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1019 16:58:46.423855  231857 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1019 16:58:46.423922  231857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 17:03:46.657513  231857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6m0.289802423s)
	W1019 17:03:46.657929  231857 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	I1019 17:03:46.657199  231857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6m0.297803966s)
	W1019 17:03:46.658084  231857 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	W1019 17:03:46.658435  231857 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
	]
	I1019 17:03:46.657665  231857 ssh_runner.go:235] Completed: sudo crictl ps -a --quiet --name=kube-apiserver: (5m0.233724921s)
	I1019 17:03:46.658816  231857 cri.go:89] found id: "c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05"
	I1019 17:03:46.658827  231857 cri.go:89] found id: ""
	I1019 17:03:46.658837  231857 logs.go:282] 1 containers: [c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05]
	I1019 17:03:46.658896  231857 ssh_runner.go:195] Run: which crictl
	W1019 17:03:46.660689  231857 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:persistent-volume-provisioner)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
	Error from server (Timeout): error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
	]
	I1019 17:03:46.662567  231857 out.go:179] * Enabled addons: 
	I1019 17:03:46.664080  231857 addons.go:515] duration metric: took 6m0.495172756s for enable addons: enabled=[]
	I1019 17:03:46.677935  231857 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1019 17:03:46.678058  231857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 17:03:46.742872  231857 cri.go:89] found id: "f2207ec2cee55c2c277a7c846fd8590213a107d1f5f032e7686d762de8cc3a03"
	I1019 17:03:46.742898  231857 cri.go:89] found id: ""
	I1019 17:03:46.742908  231857 logs.go:282] 1 containers: [f2207ec2cee55c2c277a7c846fd8590213a107d1f5f032e7686d762de8cc3a03]
	I1019 17:03:46.742968  231857 ssh_runner.go:195] Run: which crictl
	I1019 17:03:46.758318  231857 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1019 17:03:46.758404  231857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 17:03:46.824796  231857 cri.go:89] found id: ""
	I1019 17:03:46.824895  231857 logs.go:282] 0 containers: []
	W1019 17:03:46.824921  231857 logs.go:284] No container was found matching "coredns"
	I1019 17:03:46.824959  231857 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1019 17:03:46.825036  231857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 17:03:46.859926  231857 cri.go:89] found id: "32ca50174c9bd50a2b511f78610865a621358eebc785693f10178e9f68907144"
	I1019 17:03:46.860013  231857 cri.go:89] found id: ""
	I1019 17:03:46.860030  231857 logs.go:282] 1 containers: [32ca50174c9bd50a2b511f78610865a621358eebc785693f10178e9f68907144]
	I1019 17:03:46.860110  231857 ssh_runner.go:195] Run: which crictl
	I1019 17:03:46.864783  231857 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1019 17:03:46.864862  231857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 17:03:46.898200  231857 cri.go:89] found id: ""
	I1019 17:03:46.898226  231857 logs.go:282] 0 containers: []
	W1019 17:03:46.898236  231857 logs.go:284] No container was found matching "kube-proxy"
	I1019 17:03:46.898243  231857 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 17:03:46.898313  231857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 17:03:46.939298  231857 cri.go:89] found id: "44666239e810278a93aa60b778515ab258a99a3897aa3261e67dbcc63e6a017c"
	I1019 17:03:46.939321  231857 cri.go:89] found id: ""
	I1019 17:03:46.939330  231857 logs.go:282] 1 containers: [44666239e810278a93aa60b778515ab258a99a3897aa3261e67dbcc63e6a017c]
	I1019 17:03:46.939633  231857 ssh_runner.go:195] Run: which crictl
	I1019 17:03:46.945710  231857 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1019 17:03:46.945789  231857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 17:03:46.986019  231857 cri.go:89] found id: ""
	I1019 17:03:46.986057  231857 logs.go:282] 0 containers: []
	W1019 17:03:46.986069  231857 logs.go:284] No container was found matching "kindnet"
	I1019 17:03:46.986077  231857 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1019 17:03:46.986129  231857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 17:03:47.024865  231857 cri.go:89] found id: ""
	I1019 17:03:47.024909  231857 logs.go:282] 0 containers: []
	W1019 17:03:47.024920  231857 logs.go:284] No container was found matching "storage-provisioner"
	I1019 17:03:47.024937  231857 logs.go:123] Gathering logs for kubelet ...
	I1019 17:03:47.024950  231857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1019 17:03:47.074808  231857 logs.go:138] Found kubelet problem: Oct 19 16:57:34 kubernetes-upgrade-769998 kubelet[1158]: E1019 16:57:34.855579    1158 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-769998\" is forbidden: User \"system:node:kubernetes-upgrade-769998\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-769998' and this object" podUID="6e2c107735650dcdcb5b1ab91d46fddc" pod="kube-system/etcd-kubernetes-upgrade-769998"
	W1019 17:03:47.075193  231857 logs.go:138] Found kubelet problem: Oct 19 16:57:34 kubernetes-upgrade-769998 kubelet[1158]:         pods "kube-controller-manager-kubernetes-upgrade-769998" is forbidden: User "system:node:kubernetes-upgrade-769998" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-769998' and this object
	W1019 17:03:47.075565  231857 logs.go:138] Found kubelet problem: Oct 19 16:57:34 kubernetes-upgrade-769998 kubelet[1158]:         pods "kube-scheduler-kubernetes-upgrade-769998" is forbidden: User "system:node:kubernetes-upgrade-769998" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-769998' and this object
	I1019 17:03:47.138725  231857 logs.go:123] Gathering logs for describe nodes ...
	I1019 17:03:47.138778  231857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1019 17:04:47.220696  231857 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.081892136s)
	W1019 17:04:47.220747  231857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1019 17:04:47.220764  231857 logs.go:123] Gathering logs for kube-apiserver [c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05] ...
	I1019 17:04:47.220779  231857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05"
	W1019 17:04:47.248317  231857 logs.go:130] failed kube-apiserver [c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05": Process exited with status 1
	stdout:
	
	stderr:
	E1019 17:04:47.246121    3518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05\": not found" containerID="c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05"
	time="2025-10-19T17:04:47Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05\": not found"
	 output: 
	** stderr ** 
	E1019 17:04:47.246121    3518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05\": not found" containerID="c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05"
	time="2025-10-19T17:04:47Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05\": not found"
	
	** /stderr **
	I1019 17:04:47.248349  231857 logs.go:123] Gathering logs for etcd [f2207ec2cee55c2c277a7c846fd8590213a107d1f5f032e7686d762de8cc3a03] ...
	I1019 17:04:47.248384  231857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f2207ec2cee55c2c277a7c846fd8590213a107d1f5f032e7686d762de8cc3a03"
	I1019 17:04:47.283441  231857 logs.go:123] Gathering logs for kube-controller-manager [44666239e810278a93aa60b778515ab258a99a3897aa3261e67dbcc63e6a017c] ...
	I1019 17:04:47.283468  231857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 44666239e810278a93aa60b778515ab258a99a3897aa3261e67dbcc63e6a017c"
	I1019 17:04:47.311947  231857 logs.go:123] Gathering logs for containerd ...
	I1019 17:04:47.311972  231857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1019 17:04:47.382965  231857 logs.go:123] Gathering logs for dmesg ...
	I1019 17:04:47.382998  231857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 17:04:47.399604  231857 logs.go:123] Gathering logs for kube-scheduler [32ca50174c9bd50a2b511f78610865a621358eebc785693f10178e9f68907144] ...
	I1019 17:04:47.399632  231857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 32ca50174c9bd50a2b511f78610865a621358eebc785693f10178e9f68907144"
	I1019 17:04:47.429511  231857 logs.go:123] Gathering logs for container status ...
	I1019 17:04:47.429543  231857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 17:04:47.463038  231857 out.go:374] Setting ErrFile to fd 2...
	I1019 17:04:47.463083  231857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1019 17:04:47.463137  231857 out.go:285] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1019 17:04:47.463154  231857 out.go:285]   Oct 19 16:57:34 kubernetes-upgrade-769998 kubelet[1158]: E1019 16:57:34.855579    1158 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-769998\" is forbidden: User \"system:node:kubernetes-upgrade-769998\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-769998' and this object" podUID="6e2c107735650dcdcb5b1ab91d46fddc" pod="kube-system/etcd-kubernetes-upgrade-769998"
	  Oct 19 16:57:34 kubernetes-upgrade-769998 kubelet[1158]: E1019 16:57:34.855579    1158 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-769998\" is forbidden: User \"system:node:kubernetes-upgrade-769998\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-769998' and this object" podUID="6e2c107735650dcdcb5b1ab91d46fddc" pod="kube-system/etcd-kubernetes-upgrade-769998"
	W1019 17:04:47.463169  231857 out.go:285]   Oct 19 16:57:34 kubernetes-upgrade-769998 kubelet[1158]:         pods "kube-controller-manager-kubernetes-upgrade-769998" is forbidden: User "system:node:kubernetes-upgrade-769998" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-769998' and this object
	  Oct 19 16:57:34 kubernetes-upgrade-769998 kubelet[1158]:         pods "kube-controller-manager-kubernetes-upgrade-769998" is forbidden: User "system:node:kubernetes-upgrade-769998" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-769998' and this object
	W1019 17:04:47.463187  231857 out.go:285]   Oct 19 16:57:34 kubernetes-upgrade-769998 kubelet[1158]:         pods "kube-scheduler-kubernetes-upgrade-769998" is forbidden: User "system:node:kubernetes-upgrade-769998" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-769998' and this object
	  Oct 19 16:57:34 kubernetes-upgrade-769998 kubelet[1158]:         pods "kube-scheduler-kubernetes-upgrade-769998" is forbidden: User "system:node:kubernetes-upgrade-769998" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-769998' and this object
	I1019 17:04:47.463198  231857 out.go:374] Setting ErrFile to fd 2...
	I1019 17:04:47.463207  231857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:04:57.465108  231857 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 17:05:02.469290  231857 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1019 17:05:02.472103  231857 out.go:203] 
	W1019 17:05:02.473794  231857 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1019 17:05:02.473817  231857 out.go:285] * 
	* 
	W1019 17:05:02.476432  231857 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 17:05:02.478603  231857 out.go:203] 

                                                
                                                
** /stderr **
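The stderr above shows minikube repeatedly polling https://192.168.85.2:8443/healthz and receiving 500 responses in which only the "[-]etcd failed: reason withheld" check fails, until the 6m0s wait-for-healthy-API-server window expires. As a rough illustration only (the profile name, node IP, and context below are taken from this log; the commands are not part of the test harness), the same probe could be repeated by hand to watch the individual health checks:

	# query the verbose health endpoints from inside the node (-k skips TLS verification)
	minikube ssh -p kubernetes-upgrade-769998 -- curl -sk 'https://192.168.85.2:8443/healthz?verbose'
	minikube ssh -p kubernetes-upgrade-769998 -- curl -sk 'https://192.168.85.2:8443/readyz?verbose'

	# or through kubectl, using the kubeconfig context minikube creates for the profile
	kubectl --context kubernetes-upgrade-769998 get --raw='/readyz?verbose'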
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-769998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 80
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-10-19 17:05:02.536154458 +0000 UTC m=+2673.686443519
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-769998
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-769998:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "96f4bb6be83240be1993b5b8a6680f0745a1af735eb6a91eb6eda33c4fc9e197",
	        "Created": "2025-10-19T16:56:55.681760601Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225612,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T16:57:14.338651799Z",
	            "FinishedAt": "2025-10-19T16:57:13.559550878Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/96f4bb6be83240be1993b5b8a6680f0745a1af735eb6a91eb6eda33c4fc9e197/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/96f4bb6be83240be1993b5b8a6680f0745a1af735eb6a91eb6eda33c4fc9e197/hostname",
	        "HostsPath": "/var/lib/docker/containers/96f4bb6be83240be1993b5b8a6680f0745a1af735eb6a91eb6eda33c4fc9e197/hosts",
	        "LogPath": "/var/lib/docker/containers/96f4bb6be83240be1993b5b8a6680f0745a1af735eb6a91eb6eda33c4fc9e197/96f4bb6be83240be1993b5b8a6680f0745a1af735eb6a91eb6eda33c4fc9e197-json.log",
	        "Name": "/kubernetes-upgrade-769998",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-769998:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "kubernetes-upgrade-769998",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "96f4bb6be83240be1993b5b8a6680f0745a1af735eb6a91eb6eda33c4fc9e197",
	                "LowerDir": "/var/lib/docker/overlay2/09c66f1740fd3a8d2551d3ad36532591123037ce96d815c847a99cf5778ca74e-init/diff:/var/lib/docker/overlay2/679788dc5d6c9ac02347cc41d6b5035c8cb9d202024310ee3487f11ae7ab51e7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/09c66f1740fd3a8d2551d3ad36532591123037ce96d815c847a99cf5778ca74e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/09c66f1740fd3a8d2551d3ad36532591123037ce96d815c847a99cf5778ca74e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/09c66f1740fd3a8d2551d3ad36532591123037ce96d815c847a99cf5778ca74e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-769998",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-769998/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-769998",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-769998",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-769998",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c2d29f185bd65b6ab98ce092c8aea96f1fd75c55f98f95efa01df42349b28427",
	            "SandboxKey": "/var/run/docker/netns/c2d29f185bd6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33028"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33029"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33032"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33030"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33031"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-769998": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:22:4d:7d:27:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f46701fa12778b0c8ed927b8607078f7beacc00b07d561d6f2e81e7384a6dd44",
	                    "EndpointID": "a5bfd5fabb8fed2e894f7c8d8d22d58ef15278bc11e890ba01abf9fbc89a8c88",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-769998",
	                        "96f4bb6be832"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
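For reference, the 22/tcp -> 33028 binding shown in the inspect output above is the same mapping minikube itself reads back with "docker container inspect -f" (see the cli_runner lines later in this log). A minimal Go sketch of that lookup, assuming docker is on PATH; the helper name hostPortFor and the hard-coded container name are illustrative only:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor returns the host port published for a container port such as
// "22/tcp", using the same inspect template that appears in the log below.
func hostPortFor(container, port string) (string, error) {
	tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPortFor("kubernetes-upgrade-769998", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", p) // 33028 for the container inspected above
}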
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-769998 -n kubernetes-upgrade-769998
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-769998 -n kubernetes-upgrade-769998: exit status 2 (16.49535174s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
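The --format value passed to the status command above is a Go text/template rendered against minikube's status structure, which is why the raw stdout is just "Running". A small, self-contained sketch of that rendering; the Status struct here is a simplified stand-in for illustration, not minikube's actual type:

package main

import (
	"os"
	"text/template"
)

// Status is a minimal stand-in for the fields minikube exposes to
// "status --format"; only Host is used in this example.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// Same template string as the helpers_test.go invocation above.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Running"}) // prints: Running
}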
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-769998 logs -n 25
E1019 17:05:21.347195    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kindnet-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:26.468733    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kindnet-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:28.510894    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/calico-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:28.517313    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/calico-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:28.528850    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/calico-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-769998 logs -n 25: (1m0.869946125s)
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ stop    │ -p embed-certs-493778 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-493778           │ jenkins │ v1.37.0 │ 19 Oct 25 17:03 UTC │ 19 Oct 25 17:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-493778 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-493778           │ jenkins │ v1.37.0 │ 19 Oct 25 17:03 UTC │ 19 Oct 25 17:03 UTC │
	│ start   │ -p embed-certs-493778 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-493778           │ jenkins │ v1.37.0 │ 19 Oct 25 17:03 UTC │ 19 Oct 25 17:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-189367 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-189367            │ jenkins │ v1.37.0 │ 19 Oct 25 17:04 UTC │ 19 Oct 25 17:04 UTC │
	│ stop    │ -p no-preload-189367 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-189367            │ jenkins │ v1.37.0 │ 19 Oct 25 17:04 UTC │ 19 Oct 25 17:04 UTC │
	│ addons  │ enable dashboard -p no-preload-189367 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-189367            │ jenkins │ v1.37.0 │ 19 Oct 25 17:04 UTC │ 19 Oct 25 17:04 UTC │
	│ start   │ -p no-preload-189367 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-189367            │ jenkins │ v1.37.0 │ 19 Oct 25 17:04 UTC │ 19 Oct 25 17:04 UTC │
	│ image   │ old-k8s-version-309999 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-309999       │ jenkins │ v1.37.0 │ 19 Oct 25 17:04 UTC │ 19 Oct 25 17:04 UTC │
	│ pause   │ -p old-k8s-version-309999 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-309999       │ jenkins │ v1.37.0 │ 19 Oct 25 17:04 UTC │ 19 Oct 25 17:04 UTC │
	│ unpause │ -p old-k8s-version-309999 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-309999       │ jenkins │ v1.37.0 │ 19 Oct 25 17:04 UTC │ 19 Oct 25 17:04 UTC │
	│ delete  │ -p old-k8s-version-309999                                                                                                                                                                                                                           │ old-k8s-version-309999       │ jenkins │ v1.37.0 │ 19 Oct 25 17:04 UTC │ 19 Oct 25 17:04 UTC │
	│ delete  │ -p old-k8s-version-309999                                                                                                                                                                                                                           │ old-k8s-version-309999       │ jenkins │ v1.37.0 │ 19 Oct 25 17:04 UTC │ 19 Oct 25 17:04 UTC │
	│ delete  │ -p disable-driver-mounts-398463                                                                                                                                                                                                                     │ disable-driver-mounts-398463 │ jenkins │ v1.37.0 │ 19 Oct 25 17:04 UTC │ 19 Oct 25 17:04 UTC │
	│ start   │ -p default-k8s-diff-port-884246 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-884246 │ jenkins │ v1.37.0 │ 19 Oct 25 17:04 UTC │                     │
	│ image   │ embed-certs-493778 image list --format=json                                                                                                                                                                                                         │ embed-certs-493778           │ jenkins │ v1.37.0 │ 19 Oct 25 17:05 UTC │ 19 Oct 25 17:05 UTC │
	│ pause   │ -p embed-certs-493778 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-493778           │ jenkins │ v1.37.0 │ 19 Oct 25 17:05 UTC │ 19 Oct 25 17:05 UTC │
	│ unpause │ -p embed-certs-493778 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-493778           │ jenkins │ v1.37.0 │ 19 Oct 25 17:05 UTC │ 19 Oct 25 17:05 UTC │
	│ delete  │ -p embed-certs-493778                                                                                                                                                                                                                               │ embed-certs-493778           │ jenkins │ v1.37.0 │ 19 Oct 25 17:05 UTC │ 19 Oct 25 17:05 UTC │
	│ delete  │ -p embed-certs-493778                                                                                                                                                                                                                               │ embed-certs-493778           │ jenkins │ v1.37.0 │ 19 Oct 25 17:05 UTC │ 19 Oct 25 17:05 UTC │
	│ start   │ -p newest-cni-104383 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-104383            │ jenkins │ v1.37.0 │ 19 Oct 25 17:05 UTC │                     │
	│ image   │ no-preload-189367 image list --format=json                                                                                                                                                                                                          │ no-preload-189367            │ jenkins │ v1.37.0 │ 19 Oct 25 17:05 UTC │ 19 Oct 25 17:05 UTC │
	│ pause   │ -p no-preload-189367 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-189367            │ jenkins │ v1.37.0 │ 19 Oct 25 17:05 UTC │ 19 Oct 25 17:05 UTC │
	│ unpause │ -p no-preload-189367 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-189367            │ jenkins │ v1.37.0 │ 19 Oct 25 17:05 UTC │ 19 Oct 25 17:05 UTC │
	│ delete  │ -p no-preload-189367                                                                                                                                                                                                                                │ no-preload-189367            │ jenkins │ v1.37.0 │ 19 Oct 25 17:05 UTC │ 19 Oct 25 17:05 UTC │
	│ delete  │ -p no-preload-189367                                                                                                                                                                                                                                │ no-preload-189367            │ jenkins │ v1.37.0 │ 19 Oct 25 17:05 UTC │ 19 Oct 25 17:05 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:05:06
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:05:06.269709  338441 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:05:06.269810  338441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:05:06.269822  338441 out.go:374] Setting ErrFile to fd 2...
	I1019 17:05:06.269829  338441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:05:06.270094  338441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
	I1019 17:05:06.270640  338441 out.go:368] Setting JSON to false
	I1019 17:05:06.271923  338441 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2848,"bootTime":1760890658,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:05:06.272013  338441 start.go:143] virtualization: kvm guest
	I1019 17:05:06.273976  338441 out.go:179] * [newest-cni-104383] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:05:06.275882  338441 notify.go:221] Checking for updates...
	I1019 17:05:06.275926  338441 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:05:06.277224  338441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:05:06.278718  338441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	I1019 17:05:06.280175  338441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	I1019 17:05:06.281567  338441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:05:06.283091  338441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:05:06.284762  338441 config.go:182] Loaded profile config "default-k8s-diff-port-884246": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 17:05:06.284859  338441 config.go:182] Loaded profile config "kubernetes-upgrade-769998": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 17:05:06.284958  338441 config.go:182] Loaded profile config "no-preload-189367": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 17:05:06.285070  338441 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:05:06.310825  338441 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 17:05:06.310952  338441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:05:06.373885  338441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:05:06.362451118 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:05:06.374021  338441 docker.go:319] overlay module found
	I1019 17:05:06.376090  338441 out.go:179] * Using the docker driver based on user configuration
	I1019 17:05:06.377540  338441 start.go:309] selected driver: docker
	I1019 17:05:06.377556  338441 start.go:930] validating driver "docker" against <nil>
	I1019 17:05:06.377570  338441 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:05:06.378161  338441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 17:05:06.441024  338441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 17:05:06.429957942 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 17:05:06.441226  338441 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1019 17:05:06.441269  338441 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1019 17:05:06.441502  338441 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 17:05:06.444167  338441 out.go:179] * Using Docker driver with root privileges
	I1019 17:05:06.445606  338441 cni.go:84] Creating CNI manager for ""
	I1019 17:05:06.445677  338441 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1019 17:05:06.445688  338441 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 17:05:06.445761  338441 start.go:353] cluster config:
	{Name:newest-cni-104383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-104383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:05:06.447259  338441 out.go:179] * Starting "newest-cni-104383" primary control-plane node in "newest-cni-104383" cluster
	I1019 17:05:06.448634  338441 cache.go:124] Beginning downloading kic base image for docker with containerd
	I1019 17:05:06.450161  338441 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 17:05:06.451691  338441 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1019 17:05:06.451731  338441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 17:05:06.451773  338441 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1019 17:05:06.451788  338441 cache.go:59] Caching tarball of preloaded images
	I1019 17:05:06.451928  338441 preload.go:233] Found /home/jenkins/minikube-integration/21683-3708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1019 17:05:06.451943  338441 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1019 17:05:06.452103  338441 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/newest-cni-104383/config.json ...
	I1019 17:05:06.452129  338441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/newest-cni-104383/config.json: {Name:mk5d86cb0b2b27adefaceb3fcf0ef401eb437b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:05:06.474638  338441 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 17:05:06.474663  338441 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 17:05:06.474681  338441 cache.go:233] Successfully downloaded all kic artifacts
	I1019 17:05:06.474709  338441 start.go:360] acquireMachinesLock for newest-cni-104383: {Name:mk38c50012e6639c7997795c56831f74cc8b7ed7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:05:06.474818  338441 start.go:364] duration metric: took 90.179µs to acquireMachinesLock for "newest-cni-104383"
	I1019 17:05:06.474849  338441 start.go:93] Provisioning new machine with config: &{Name:newest-cni-104383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-104383 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1019 17:05:06.474929  338441 start.go:125] createHost starting for "" (driver="docker")
	I1019 17:05:03.392734  333977 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00238358s
	I1019 17:05:03.396322  333977 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 17:05:03.396404  333977 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1019 17:05:03.396474  333977 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 17:05:03.396622  333977 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 17:05:05.047278  333977 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.649593114s
	I1019 17:05:06.023162  333977 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.626702881s
	I1019 17:05:07.898207  333977 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501697727s
	I1019 17:05:07.911148  333977 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 17:05:07.928887  333977 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 17:05:07.941847  333977 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 17:05:07.942203  333977 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-884246 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 17:05:07.953247  333977 kubeadm.go:319] [bootstrap-token] Using token: az2n8g.111oosrs2ygc0qx9
	I1019 17:05:07.955087  333977 out.go:252]   - Configuring RBAC rules ...
	I1019 17:05:07.955223  333977 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 17:05:07.961665  333977 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 17:05:07.969317  333977 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 17:05:07.974527  333977 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 17:05:07.978612  333977 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 17:05:07.982415  333977 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 17:05:08.306628  333977 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 17:05:08.727393  333977 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 17:05:09.304415  333977 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 17:05:09.306131  333977 kubeadm.go:319] 
	I1019 17:05:09.306222  333977 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 17:05:09.306234  333977 kubeadm.go:319] 
	I1019 17:05:09.306375  333977 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 17:05:09.306384  333977 kubeadm.go:319] 
	I1019 17:05:09.306436  333977 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 17:05:09.306599  333977 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 17:05:09.306693  333977 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 17:05:09.306703  333977 kubeadm.go:319] 
	I1019 17:05:09.306779  333977 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 17:05:09.306787  333977 kubeadm.go:319] 
	I1019 17:05:09.306853  333977 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 17:05:09.306877  333977 kubeadm.go:319] 
	I1019 17:05:09.306960  333977 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 17:05:09.307090  333977 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 17:05:09.307195  333977 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 17:05:09.307207  333977 kubeadm.go:319] 
	I1019 17:05:09.307325  333977 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 17:05:09.307475  333977 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 17:05:09.307497  333977 kubeadm.go:319] 
	I1019 17:05:09.307625  333977 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token az2n8g.111oosrs2ygc0qx9 \
	I1019 17:05:09.307781  333977 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:296b8c8988958d24145b86d3181131a0fd7c310c0b390c1bb51aedcaaaabcac4 \
	I1019 17:05:09.307825  333977 kubeadm.go:319] 	--control-plane 
	I1019 17:05:09.307835  333977 kubeadm.go:319] 
	I1019 17:05:09.307959  333977 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 17:05:09.307969  333977 kubeadm.go:319] 
	I1019 17:05:09.308119  333977 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token az2n8g.111oosrs2ygc0qx9 \
	I1019 17:05:09.308273  333977 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:296b8c8988958d24145b86d3181131a0fd7c310c0b390c1bb51aedcaaaabcac4 
	I1019 17:05:09.310799  333977 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 17:05:09.310979  333977 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 17:05:09.311008  333977 cni.go:84] Creating CNI manager for ""
	I1019 17:05:09.311019  333977 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1019 17:05:09.314832  333977 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 17:05:06.477913  338441 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 17:05:06.478183  338441 start.go:159] libmachine.API.Create for "newest-cni-104383" (driver="docker")
	I1019 17:05:06.478244  338441 client.go:171] LocalClient.Create starting
	I1019 17:05:06.478365  338441 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca.pem
	I1019 17:05:06.478414  338441 main.go:143] libmachine: Decoding PEM data...
	I1019 17:05:06.478433  338441 main.go:143] libmachine: Parsing certificate...
	I1019 17:05:06.478507  338441 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-3708/.minikube/certs/cert.pem
	I1019 17:05:06.478531  338441 main.go:143] libmachine: Decoding PEM data...
	I1019 17:05:06.478544  338441 main.go:143] libmachine: Parsing certificate...
	I1019 17:05:06.478898  338441 cli_runner.go:164] Run: docker network inspect newest-cni-104383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 17:05:06.498058  338441 cli_runner.go:211] docker network inspect newest-cni-104383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 17:05:06.498143  338441 network_create.go:284] running [docker network inspect newest-cni-104383] to gather additional debugging logs...
	I1019 17:05:06.498165  338441 cli_runner.go:164] Run: docker network inspect newest-cni-104383
	W1019 17:05:06.516230  338441 cli_runner.go:211] docker network inspect newest-cni-104383 returned with exit code 1
	I1019 17:05:06.516286  338441 network_create.go:287] error running [docker network inspect newest-cni-104383]: docker network inspect newest-cni-104383: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-104383 not found
	I1019 17:05:06.516316  338441 network_create.go:289] output of [docker network inspect newest-cni-104383]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-104383 not found
	
	** /stderr **
	I1019 17:05:06.516495  338441 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:05:06.536427  338441 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-006c23c4183a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:c9:92:a9:c4:7e} reservation:<nil>}
	I1019 17:05:06.537182  338441 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-445939d9af99 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:ed:3d:27:15:da} reservation:<nil>}
	I1019 17:05:06.537945  338441 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-900bf1299122 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:16:c9:4d:3d:3c} reservation:<nil>}
	I1019 17:05:06.538627  338441 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-569460d60ba5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:e7:a3:18:1a:3a} reservation:<nil>}
	I1019 17:05:06.539131  338441 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f46701fa1277 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:d2:be:b2:17:e7:72} reservation:<nil>}
	I1019 17:05:06.539891  338441 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-2876cbf59fac IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:2e:fe:ed:88:5b:f8} reservation:<nil>}
	I1019 17:05:06.540830  338441 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ff3580}
	I1019 17:05:06.540862  338441 network_create.go:124] attempt to create docker network newest-cni-104383 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1019 17:05:06.540910  338441 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-104383 newest-cni-104383
	I1019 17:05:06.606497  338441 network_create.go:108] docker network newest-cni-104383 192.168.103.0/24 created
	I1019 17:05:06.606532  338441 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-104383" container
	I1019 17:05:06.606625  338441 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 17:05:06.626670  338441 cli_runner.go:164] Run: docker volume create newest-cni-104383 --label name.minikube.sigs.k8s.io=newest-cni-104383 --label created_by.minikube.sigs.k8s.io=true
	I1019 17:05:06.646771  338441 oci.go:103] Successfully created a docker volume newest-cni-104383
	I1019 17:05:06.646843  338441 cli_runner.go:164] Run: docker run --rm --name newest-cni-104383-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-104383 --entrypoint /usr/bin/test -v newest-cni-104383:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 17:05:07.045303  338441 oci.go:107] Successfully prepared a docker volume newest-cni-104383
	I1019 17:05:07.045383  338441 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1019 17:05:07.045407  338441 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 17:05:07.045497  338441 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-104383:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 17:05:09.316203  333977 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 17:05:09.320732  333977 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 17:05:09.320750  333977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 17:05:09.335185  333977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 17:05:09.590381  333977 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:05:09.590593  333977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:05:09.590677  333977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-884246 minikube.k8s.io/updated_at=2025_10_19T17_05_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=default-k8s-diff-port-884246 minikube.k8s.io/primary=true
	I1019 17:05:09.602406  333977 ops.go:34] apiserver oom_adj: -16
	I1019 17:05:09.759125  333977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:05:10.259878  333977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:05:10.759190  333977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:05:11.259836  333977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:05:11.760074  333977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:05:12.259780  333977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:05:12.759161  333977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:05:13.259682  333977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:05:13.759192  333977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:05:14.259175  333977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 17:05:14.351316  333977 kubeadm.go:1114] duration metric: took 4.760773184s to wait for elevateKubeSystemPrivileges
	I1019 17:05:14.351356  333977 kubeadm.go:403] duration metric: took 15.857814527s to StartCluster
	I1019 17:05:14.351379  333977 settings.go:142] acquiring lock: {Name:mk9f6d8488524ed11aac6e6756fd16e3b7b486fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:05:14.351443  333977 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-3708/kubeconfig
	I1019 17:05:14.353272  333977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-3708/kubeconfig: {Name:mk8f9aa104a9030ac2c43bdf10909ff66220bc19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:05:14.353739  333977 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 17:05:14.353926  333977 config.go:182] Loaded profile config "default-k8s-diff-port-884246": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 17:05:14.354123  333977 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1019 17:05:14.354149  333977 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:05:14.354274  333977 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-884246"
	I1019 17:05:14.354364  333977 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-884246"
	I1019 17:05:14.354395  333977 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-884246"
	I1019 17:05:14.354370  333977 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-884246"
	I1019 17:05:14.354578  333977 host.go:66] Checking if "default-k8s-diff-port-884246" exists ...
	I1019 17:05:14.354730  333977 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-884246 --format={{.State.Status}}
	I1019 17:05:14.355105  333977 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-884246 --format={{.State.Status}}
	I1019 17:05:14.356412  333977 out.go:179] * Verifying Kubernetes components...
	I1019 17:05:14.357875  333977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:05:14.385936  333977 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:05:14.388278  333977 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-884246"
	I1019 17:05:14.388596  333977 host.go:66] Checking if "default-k8s-diff-port-884246" exists ...
	I1019 17:05:14.389108  333977 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-884246 --format={{.State.Status}}
	I1019 17:05:14.390816  333977 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:05:14.390837  333977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:05:14.390892  333977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-884246
	I1019 17:05:14.429006  333977 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:05:14.429215  333977 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:05:14.429435  333977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-884246
	I1019 17:05:14.430060  333977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/default-k8s-diff-port-884246/id_rsa Username:docker}
	I1019 17:05:14.459473  333977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/default-k8s-diff-port-884246/id_rsa Username:docker}
	I1019 17:05:14.479397  333977 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 17:05:14.547225  333977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:05:14.563365  333977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:05:14.588940  333977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:05:14.721209  333977 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1019 17:05:14.722606  333977 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-884246" to be "Ready" ...
	I1019 17:05:14.947663  333977 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 17:05:11.819364  338441 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-3708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-104383:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.773776835s)
	I1019 17:05:11.819398  338441 kic.go:203] duration metric: took 4.773989117s to extract preloaded images to volume ...
	W1019 17:05:11.819508  338441 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 17:05:11.819548  338441 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 17:05:11.819604  338441 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 17:05:11.880069  338441 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-104383 --name newest-cni-104383 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-104383 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-104383 --network newest-cni-104383 --ip 192.168.103.2 --volume newest-cni-104383:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 17:05:12.194752  338441 cli_runner.go:164] Run: docker container inspect newest-cni-104383 --format={{.State.Running}}
	I1019 17:05:12.215852  338441 cli_runner.go:164] Run: docker container inspect newest-cni-104383 --format={{.State.Status}}
	I1019 17:05:12.236487  338441 cli_runner.go:164] Run: docker exec newest-cni-104383 stat /var/lib/dpkg/alternatives/iptables
	I1019 17:05:12.285231  338441 oci.go:144] the created container "newest-cni-104383" has a running status.
	I1019 17:05:12.285268  338441 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-3708/.minikube/machines/newest-cni-104383/id_rsa...
	I1019 17:05:12.473874  338441 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-3708/.minikube/machines/newest-cni-104383/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 17:05:12.501578  338441 cli_runner.go:164] Run: docker container inspect newest-cni-104383 --format={{.State.Status}}
	I1019 17:05:12.523748  338441 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 17:05:12.523774  338441 kic_runner.go:114] Args: [docker exec --privileged newest-cni-104383 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 17:05:12.576166  338441 cli_runner.go:164] Run: docker container inspect newest-cni-104383 --format={{.State.Status}}
	I1019 17:05:12.596403  338441 machine.go:94] provisionDockerMachine start ...
	I1019 17:05:12.596525  338441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-104383
	I1019 17:05:12.618624  338441 main.go:143] libmachine: Using SSH client type: native
	I1019 17:05:12.618946  338441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:05:12.618963  338441 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:05:12.619714  338441 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52304->127.0.0.1:33128: read: connection reset by peer
	I1019 17:05:15.767750  338441 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-104383
	
	I1019 17:05:15.767778  338441 ubuntu.go:182] provisioning hostname "newest-cni-104383"
	I1019 17:05:15.767840  338441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-104383
	I1019 17:05:15.790567  338441 main.go:143] libmachine: Using SSH client type: native
	I1019 17:05:15.790861  338441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:05:15.790884  338441 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-104383 && echo "newest-cni-104383" | sudo tee /etc/hostname
	I1019 17:05:15.944038  338441 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-104383
	
	I1019 17:05:15.944149  338441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-104383
	I1019 17:05:15.963225  338441 main.go:143] libmachine: Using SSH client type: native
	I1019 17:05:15.963456  338441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1019 17:05:15.963492  338441 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-104383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-104383/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-104383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:05:16.097945  338441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:05:16.097980  338441 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-3708/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-3708/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-3708/.minikube}
	I1019 17:05:16.098037  338441 ubuntu.go:190] setting up certificates
	I1019 17:05:16.098071  338441 provision.go:84] configureAuth start
	I1019 17:05:16.098158  338441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-104383
	I1019 17:05:16.118401  338441 provision.go:143] copyHostCerts
	I1019 17:05:16.118472  338441 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3708/.minikube/ca.pem, removing ...
	I1019 17:05:16.118484  338441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3708/.minikube/ca.pem
	I1019 17:05:16.118545  338441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-3708/.minikube/ca.pem (1082 bytes)
	I1019 17:05:16.118649  338441 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3708/.minikube/cert.pem, removing ...
	I1019 17:05:16.118657  338441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3708/.minikube/cert.pem
	I1019 17:05:16.118686  338441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3708/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-3708/.minikube/cert.pem (1123 bytes)
	I1019 17:05:16.118757  338441 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-3708/.minikube/key.pem, removing ...
	I1019 17:05:16.118761  338441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-3708/.minikube/key.pem
	I1019 17:05:16.118789  338441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-3708/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-3708/.minikube/key.pem (1679 bytes)
	I1019 17:05:16.118860  338441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-3708/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca-key.pem org=jenkins.newest-cni-104383 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-104383]
	I1019 17:05:14.948737  333977 addons.go:515] duration metric: took 594.583951ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 17:05:15.226037  333977 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-884246" context rescaled to 1 replicas
	W1019 17:05:16.725739  333977 node_ready.go:57] node "default-k8s-diff-port-884246" has "Ready":"False" status (will retry)
	I1019 17:05:16.303915  338441 provision.go:177] copyRemoteCerts
	I1019 17:05:16.303974  338441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:05:16.304013  338441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-104383
	I1019 17:05:16.322853  338441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/newest-cni-104383/id_rsa Username:docker}
	I1019 17:05:16.420924  338441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:05:16.444273  338441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 17:05:16.463391  338441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:05:16.482010  338441 provision.go:87] duration metric: took 383.924321ms to configureAuth
	I1019 17:05:16.482060  338441 ubuntu.go:206] setting minikube options for container-runtime
	I1019 17:05:16.482284  338441 config.go:182] Loaded profile config "newest-cni-104383": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 17:05:16.482305  338441 machine.go:97] duration metric: took 3.885879498s to provisionDockerMachine
	I1019 17:05:16.482315  338441 client.go:174] duration metric: took 10.004059342s to LocalClient.Create
	I1019 17:05:16.482339  338441 start.go:167] duration metric: took 10.004156741s to libmachine.API.Create "newest-cni-104383"
	I1019 17:05:16.482352  338441 start.go:293] postStartSetup for "newest-cni-104383" (driver="docker")
	I1019 17:05:16.482363  338441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:05:16.482423  338441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:05:16.482468  338441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-104383
	I1019 17:05:16.503010  338441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/newest-cni-104383/id_rsa Username:docker}
	I1019 17:05:16.623547  338441 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:05:16.627467  338441 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 17:05:16.627503  338441 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 17:05:16.627517  338441 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3708/.minikube/addons for local assets ...
	I1019 17:05:16.627573  338441 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-3708/.minikube/files for local assets ...
	I1019 17:05:16.627679  338441 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-3708/.minikube/files/etc/ssl/certs/72542.pem -> 72542.pem in /etc/ssl/certs
	I1019 17:05:16.627765  338441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:05:16.637350  338441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-3708/.minikube/files/etc/ssl/certs/72542.pem --> /etc/ssl/certs/72542.pem (1708 bytes)
	I1019 17:05:16.659603  338441 start.go:296] duration metric: took 177.236678ms for postStartSetup
	I1019 17:05:16.660103  338441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-104383
	I1019 17:05:16.682374  338441 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/newest-cni-104383/config.json ...
	I1019 17:05:16.682679  338441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:05:16.682733  338441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-104383
	I1019 17:05:16.702421  338441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/newest-cni-104383/id_rsa Username:docker}
	I1019 17:05:16.797222  338441 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 17:05:16.802628  338441 start.go:128] duration metric: took 10.327682787s to createHost
	I1019 17:05:16.802662  338441 start.go:83] releasing machines lock for "newest-cni-104383", held for 10.32782845s
	I1019 17:05:16.802748  338441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-104383
	I1019 17:05:16.823470  338441 ssh_runner.go:195] Run: cat /version.json
	I1019 17:05:16.823528  338441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-104383
	I1019 17:05:16.823553  338441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:05:16.823628  338441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-104383
	I1019 17:05:16.844102  338441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/newest-cni-104383/id_rsa Username:docker}
	I1019 17:05:16.845548  338441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/newest-cni-104383/id_rsa Username:docker}
	I1019 17:05:16.995177  338441 ssh_runner.go:195] Run: systemctl --version
	I1019 17:05:17.001926  338441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:05:17.006658  338441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:05:17.006730  338441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:05:17.032554  338441 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1019 17:05:17.032577  338441 start.go:496] detecting cgroup driver to use...
	I1019 17:05:17.032611  338441 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 17:05:17.032669  338441 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1019 17:05:17.047739  338441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1019 17:05:17.060911  338441 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:05:17.060968  338441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:05:17.077633  338441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:05:17.095116  338441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:05:17.183826  338441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:05:17.268747  338441 docker.go:234] disabling docker service ...
	I1019 17:05:17.268820  338441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:05:17.287322  338441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:05:17.300545  338441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:05:17.385095  338441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:05:17.469459  338441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:05:17.482421  338441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:05:17.497480  338441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1019 17:05:17.508715  338441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1019 17:05:17.518216  338441 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1019 17:05:17.518331  338441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1019 17:05:17.527680  338441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1019 17:05:17.537328  338441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1019 17:05:17.546799  338441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1019 17:05:17.556455  338441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:05:17.565108  338441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1019 17:05:17.574592  338441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1019 17:05:17.583560  338441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1019 17:05:17.592910  338441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:05:17.600928  338441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:05:17.608881  338441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:05:17.692802  338441 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1019 17:05:17.797120  338441 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1019 17:05:17.797194  338441 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1019 17:05:17.801555  338441 start.go:564] Will wait 60s for crictl version
	I1019 17:05:17.801616  338441 ssh_runner.go:195] Run: which crictl
	I1019 17:05:17.805597  338441 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 17:05:17.830856  338441 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1019 17:05:17.830908  338441 ssh_runner.go:195] Run: containerd --version
	I1019 17:05:17.855301  338441 ssh_runner.go:195] Run: containerd --version
	I1019 17:05:17.882225  338441 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1019 17:05:17.883740  338441 cli_runner.go:164] Run: docker network inspect newest-cni-104383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 17:05:17.902152  338441 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1019 17:05:17.906275  338441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:05:17.918552  338441 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
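
The sed commands logged above rewrite /etc/containerd/config.toml in place before containerd is restarted at 17:05:17. A minimal read-back sketch of the keys they target is shown below; it assumes the newest-cni-104383 node container is still running and uses only docker exec, grep and systemctl (the key names and expected values are taken straight from the sed patterns in the log, nothing else is implied about the file's layout):

	# hypothetical spot-check; keys/values copied from the sed commands above
	docker exec newest-cni-104383 grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	# expected per the log: SystemdCgroup = true, sandbox_image = "registry.k8s.io/pause:3.10.1",
	# restrict_oom_score_adj = false, conf_dir = "/etc/cni/net.d", enable_unprivileged_ports = true
	docker exec newest-cni-104383 systemctl is-active containerd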
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                 NAMESPACE
	ea0d4dd471ad5       c80c8dbafe7dd       About a minute ago   Running             kube-controller-manager   1                   741f282b1665e       kube-controller-manager-kubernetes-upgrade-769998   kube-system
	9a762c9684331       c3994bc696102       About a minute ago   Exited              kube-apiserver            7                   f8ea4c6658f8b       kube-apiserver-kubernetes-upgrade-769998            kube-system
	44666239e8102       c80c8dbafe7dd       5 minutes ago        Exited              kube-controller-manager   0                   741f282b1665e       kube-controller-manager-kubernetes-upgrade-769998   kube-system
	32ca50174c9bd       7dd6aaa1717ab       5 minutes ago        Running             kube-scheduler            0                   810ff30c89d98       kube-scheduler-kubernetes-upgrade-769998            kube-system
	f2207ec2cee55       5f1f5298c888d       6 minutes ago        Running             etcd                      0                   25c86b10e06a8       etcd-kubernetes-upgrade-769998                      kube-system
	
	
	==> containerd <==
	Oct 19 17:01:03 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:01:03.791914712Z" level=info msg="received exit event container_id:\"de926debb0f5bf0ba4c6a2be06cc25c493266e40474c036a1eaf83bb991ae057\"  id:\"de926debb0f5bf0ba4c6a2be06cc25c493266e40474c036a1eaf83bb991ae057\"  pid:3235  exit_status:1  exited_at:{seconds:1760893263  nanos:791472763}"
	Oct 19 17:01:03 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:01:03.819723570Z" level=info msg="shim disconnected" id=de926debb0f5bf0ba4c6a2be06cc25c493266e40474c036a1eaf83bb991ae057 namespace=k8s.io
	Oct 19 17:01:03 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:01:03.819772840Z" level=warning msg="cleaning up after shim disconnected" id=de926debb0f5bf0ba4c6a2be06cc25c493266e40474c036a1eaf83bb991ae057 namespace=k8s.io
	Oct 19 17:01:03 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:01:03.819783753Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 19 17:01:04 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:01:04.069237325Z" level=info msg="RemoveContainer for \"b7c89848034f7e3a7c6a445c16794bbbeb46adde2f2cce0d3e595a5fb4562fdd\""
	Oct 19 17:01:04 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:01:04.074227519Z" level=info msg="RemoveContainer for \"b7c89848034f7e3a7c6a445c16794bbbeb46adde2f2cce0d3e595a5fb4562fdd\" returns successfully"
	Oct 19 17:03:21 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:21.729274135Z" level=info msg="received exit event container_id:\"44666239e810278a93aa60b778515ab258a99a3897aa3261e67dbcc63e6a017c\"  id:\"44666239e810278a93aa60b778515ab258a99a3897aa3261e67dbcc63e6a017c\"  pid:3189  exit_status:1  exited_at:{seconds:1760893401  nanos:728801309}"
	Oct 19 17:03:22 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:22.445557381Z" level=info msg="shim disconnected" id=44666239e810278a93aa60b778515ab258a99a3897aa3261e67dbcc63e6a017c namespace=k8s.io
	Oct 19 17:03:22 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:22.445651819Z" level=warning msg="cleaning up after shim disconnected" id=44666239e810278a93aa60b778515ab258a99a3897aa3261e67dbcc63e6a017c namespace=k8s.io
	Oct 19 17:03:22 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:22.445669631Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 19 17:03:52 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:52.604745481Z" level=info msg="CreateContainer within sandbox \"f8ea4c6658f8be135dbe5d88ccf80c5a7627a210a1470d0abf6e16ab17576e1d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:7,}"
	Oct 19 17:03:52 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:52.616413714Z" level=info msg="CreateContainer within sandbox \"f8ea4c6658f8be135dbe5d88ccf80c5a7627a210a1470d0abf6e16ab17576e1d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:7,} returns container id \"9a762c9684331406443129f59b42f186078a61908f51c4667b54e8767f4d9818\""
	Oct 19 17:03:52 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:52.616987186Z" level=info msg="StartContainer for \"9a762c9684331406443129f59b42f186078a61908f51c4667b54e8767f4d9818\""
	Oct 19 17:03:52 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:52.696142887Z" level=info msg="StartContainer for \"9a762c9684331406443129f59b42f186078a61908f51c4667b54e8767f4d9818\" returns successfully"
	Oct 19 17:03:52 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:52.746348295Z" level=info msg="received exit event container_id:\"9a762c9684331406443129f59b42f186078a61908f51c4667b54e8767f4d9818\"  id:\"9a762c9684331406443129f59b42f186078a61908f51c4667b54e8767f4d9818\"  pid:3425  exit_status:1  exited_at:{seconds:1760893432  nanos:745901960}"
	Oct 19 17:03:52 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:52.767443346Z" level=info msg="shim disconnected" id=9a762c9684331406443129f59b42f186078a61908f51c4667b54e8767f4d9818 namespace=k8s.io
	Oct 19 17:03:52 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:52.767526904Z" level=warning msg="cleaning up after shim disconnected" id=9a762c9684331406443129f59b42f186078a61908f51c4667b54e8767f4d9818 namespace=k8s.io
	Oct 19 17:03:52 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:52.767604774Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 19 17:03:53 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:53.410124095Z" level=info msg="RemoveContainer for \"de926debb0f5bf0ba4c6a2be06cc25c493266e40474c036a1eaf83bb991ae057\""
	Oct 19 17:03:53 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:53.413786004Z" level=info msg="RemoveContainer for \"de926debb0f5bf0ba4c6a2be06cc25c493266e40474c036a1eaf83bb991ae057\" returns successfully"
	Oct 19 17:03:59 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:59.354279875Z" level=info msg="CreateContainer within sandbox \"741f282b1665e839fc7f31f1be611458dc7915daa8921d805501f632cf11dbb7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
	Oct 19 17:03:59 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:59.364861779Z" level=info msg="CreateContainer within sandbox \"741f282b1665e839fc7f31f1be611458dc7915daa8921d805501f632cf11dbb7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ea0d4dd471ad5463d0af8c607b1e08a98ea2ae00e5eb939e8e51f27dd9458d3e\""
	Oct 19 17:03:59 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:59.365401948Z" level=info msg="StartContainer for \"ea0d4dd471ad5463d0af8c607b1e08a98ea2ae00e5eb939e8e51f27dd9458d3e\""
	Oct 19 17:03:59 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:03:59.443457557Z" level=info msg="StartContainer for \"ea0d4dd471ad5463d0af8c607b1e08a98ea2ae00e5eb939e8e51f27dd9458d3e\" returns successfully"
	Oct 19 17:04:47 kubernetes-upgrade-769998 containerd[1912]: time="2025-10-19T17:04:47.245793626Z" level=error msg="ContainerStatus for \"c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9af4fc18c6ae8055f064d6d24af4ca96afa58531b99e39e7be650c36b15fc05\": not found"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 9d ac 49 5c 11 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff e6 83 a5 c9 5e 5c 08 06
	[Oct19 17:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e dc 14 a7 e1 b9 08 06
	[  +0.018238] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 94 76 48 85 25 08 06
	[ +18.105301] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff 1a 45 38 93 a2 59 08 06
	[  +0.001236] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e 33 0f 5a e0 a7 08 06
	[  +7.615349] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 71 f0 2c 19 a3 08 06
	[Oct19 17:02] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 00 ad df d0 a4 08 06
	[  +0.000382] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 33 0f 5a e0 a7 08 06
	[ +16.288058] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 b2 14 19 0a 77 08 06
	[  +0.000406] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e dc 14 a7 e1 b9 08 06
	[  +7.293969] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 60 22 05 7c 07 08 06
	[  +0.000506] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 76 71 f0 2c 19 a3 08 06
	
	
	==> etcd [f2207ec2cee55c2c277a7c846fd8590213a107d1f5f032e7686d762de8cc3a03] <==
	{"level":"info","ts":"2025-10-19T16:58:25.477194Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 3"}
	{"level":"info","ts":"2025-10-19T16:58:25.477250Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 3"}
	{"level":"info","ts":"2025-10-19T16:58:25.477353Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-19T16:58:25.477384Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-10-19T16:58:25.477408Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 4"}
	{"level":"info","ts":"2025-10-19T16:58:25.478090Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2025-10-19T16:58:25.478125Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-10-19T16:58:25.478142Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 4"}
	{"level":"info","ts":"2025-10-19T16:58:25.478150Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2025-10-19T16:58:25.478971Z","caller":"etcdserver/server.go:1804","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:kubernetes-upgrade-769998 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-19T16:58:25.479006Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T16:58:25.479082Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T16:58:25.479129Z","caller":"etcdserver/server.go:2409","msg":"updating cluster version using v3 API","from":"3.5","to":"3.6"}
	{"level":"info","ts":"2025-10-19T16:58:25.479652Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T16:58:25.479866Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-19T16:58:25.480518Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.5","to":"3.6"}
	{"level":"info","ts":"2025-10-19T16:58:25.480825Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-10-19T16:58:25.480881Z","caller":"etcdserver/server.go:2424","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-10-19T16:58:25.480985Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-10-19T16:58:25.481497Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-10-19T16:58:25.481728Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"warn","ts":"2025-10-19T16:58:25.481919Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-10-19T16:58:25.482010Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-19T16:58:25.485380Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-19T16:58:25.485852Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 17:06:19 up 48 min,  0 user,  load average: 2.56, 3.53, 2.67
	Linux kubernetes-upgrade-769998 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [9a762c9684331406443129f59b42f186078a61908f51c4667b54e8767f4d9818] <==
	I1019 17:03:52.737572       1 options.go:263] external host was not specified, using 192.168.85.2
	I1019 17:03:52.741492       1 server.go:150] Version: v1.34.1
	I1019 17:03:52.741531       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1019 17:03:52.741860       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use"
	
	
	==> kube-controller-manager [44666239e810278a93aa60b778515ab258a99a3897aa3261e67dbcc63e6a017c] <==
	I1019 17:00:19.624721       1 serving.go:386] Generated self-signed cert in-memory
	I1019 17:00:20.703517       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1019 17:00:20.703547       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:00:20.705755       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1019 17:00:20.705755       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1019 17:00:20.706251       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1019 17:00:20.706378       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 17:03:21.721128       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: the server was unable to return a response in the time allotted, but may still be processing the request"
	
	
	==> kube-controller-manager [ea0d4dd471ad5463d0af8c607b1e08a98ea2ae00e5eb939e8e51f27dd9458d3e] <==
	I1019 17:04:00.579274       1 serving.go:386] Generated self-signed cert in-memory
	I1019 17:04:00.938644       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1019 17:04:00.938669       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:04:00.940290       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1019 17:04:00.940333       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1019 17:04:00.940713       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1019 17:04:00.940846       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-scheduler [32ca50174c9bd50a2b511f78610865a621358eebc785693f10178e9f68907144] <==
	I1019 17:00:14.511698       1 serving.go:386] Generated self-signed cert in-memory
	W1019 17:01:15.114318       1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
	W1019 17:01:15.114357       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:01:15.114369       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:01:15.134346       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:01:15.134472       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:01:15.137915       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:01:15.138141       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:01:15.138209       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:01:15.138140       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:01:15.238564       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1019 17:01:49.241834       1 event_broadcaster.go:270] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{storage-provisioner.186ff31aae009d2a  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2025-10-19 17:01:15.239115127 +0000 UTC m=+61.340641938,Series:nil,ReportingController:default-scheduler,ReportingInstance:default-scheduler-kubernetes-upgrade-769998,Action:Scheduling,Reason:FailedScheduling,Regarding:{Pod kube-system storage-provisioner e98ea015-19c8-407c-a90f-0cfdb82abd71 v1 323 },Related:nil,Note:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.,Type:Warning,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
	E1019 17:01:49.244140       1 pod_status_patch.go:111] "Failed to patch pod status" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/storage-provisioner"
	
	
	==> kubelet <==
	Oct 19 17:05:38 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:05:38.644944    1158 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-kubernetes-upgrade-769998)" podUID="bd57a2b30cdbc745b7af06fa8748f82a" pod="kube-system/kube-controller-manager-kubernetes-upgrade-769998"
	Oct 19 17:05:38 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:05:38.752835    1158 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Oct 19 17:05:43 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:05:43.754568    1158 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Oct 19 17:05:48 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:05:48.241166    1158 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-769998\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-769998?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 19 17:05:48 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:05:48.755646    1158 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Oct 19 17:05:49 kubernetes-upgrade-769998 kubelet[1158]: I1019 17:05:49.602797    1158 scope.go:117] "RemoveContainer" containerID="9a762c9684331406443129f59b42f186078a61908f51c4667b54e8767f4d9818"
	Oct 19 17:05:49 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:05:49.602961    1158 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-769998_kube-system(f691ac07c0d158d6f3fac42c7b4d1b4a)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-769998" podUID="f691ac07c0d158d6f3fac42c7b4d1b4a"
	Oct 19 17:05:53 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:05:53.757123    1158 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Oct 19 17:05:54 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:05:54.965961    1158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-769998?timeout=10s\": context deadline exceeded" interval="7s"
	Oct 19 17:05:58 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:05:58.241397    1158 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-769998\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-769998?timeout=10s\": context deadline exceeded"
	Oct 19 17:05:58 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:05:58.241929    1158 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count"
	Oct 19 17:05:58 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:05:58.604816    1158 mirror_client.go:139] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-scheduler-kubernetes-upgrade-769998"
	Oct 19 17:05:58 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:05:58.758751    1158 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Oct 19 17:06:03 kubernetes-upgrade-769998 kubelet[1158]: I1019 17:06:03.603445    1158 scope.go:117] "RemoveContainer" containerID="9a762c9684331406443129f59b42f186078a61908f51c4667b54e8767f4d9818"
	Oct 19 17:06:03 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:06:03.603660    1158 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-769998_kube-system(f691ac07c0d158d6f3fac42c7b4d1b4a)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-769998" podUID="f691ac07c0d158d6f3fac42c7b4d1b4a"
	Oct 19 17:06:03 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:06:03.760361    1158 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Oct 19 17:06:08 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:06:08.761831    1158 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Oct 19 17:06:09 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:06:09.084933    1158 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kubernetes-upgrade-769998.186ff2e5e926bb3a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-769998,UID:kubernetes-upgrade-769998,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node kubernetes-upgrade-769998 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-769998,},FirstTimestamp:2025-10-19 16:57:28.598215482 +0000 UTC m=+0.079053744,LastTimestamp:2025-10-19 16:57:28.704872351 +0000 UTC m=+0.185710626,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-769998,}"
	Oct 19 17:06:11 kubernetes-upgrade-769998 kubelet[1158]: I1019 17:06:11.603271    1158 kubelet.go:3202] "Trying to delete pod" pod="kube-system/etcd-kubernetes-upgrade-769998" podUID="4a857b3e-45f2-492e-9283-428e92a40df9"
	Oct 19 17:06:11 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:06:11.966899    1158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-769998?timeout=10s\": context deadline exceeded" interval="7s"
	Oct 19 17:06:13 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:06:13.763395    1158 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Oct 19 17:06:16 kubernetes-upgrade-769998 kubelet[1158]: I1019 17:06:16.603308    1158 scope.go:117] "RemoveContainer" containerID="9a762c9684331406443129f59b42f186078a61908f51c4667b54e8767f4d9818"
	Oct 19 17:06:16 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:06:16.603485    1158 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-769998_kube-system(f691ac07c0d158d6f3fac42c7b4d1b4a)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-769998" podUID="f691ac07c0d158d6f3fac42c7b4d1b4a"
	Oct 19 17:06:18 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:06:18.642199    1158 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-19T17:06:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-19T17:06:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-19T17:06:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-19T17:06:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\\\",\\\"registry.k8s.io/etcd:3.6.4-0\\\"],\\\"sizeBytes\\\":74311308},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\\\",\\\"registry.k8s.io/kube-apiserver:v1.34.1\\\"],\\\"sizeBytes\\\":27061991},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\\\",\\\"registry.k8s.io/kube-controller-manager:v1.34.1\\\"],\\\"sizeBytes\\\":22820214},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\\\",\\\"registry.k8s.io/kube-scheduler:v1.34.1\\\"],\\\"sizeBytes\\\":17385568},{\\\"names\\\":[\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\"],\\\"sizeBytes\\\":9057171},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\\\",\\\"registry.k8s.io/pause:3.10.1\\\"],\\\"sizeBytes\\\":320448}]}}\" for node \"kubernetes-upgrade-769998\": Patch \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-769998/status?timeout=10s\": context deadline exceeded"
	Oct 19 17:06:18 kubernetes-upgrade-769998 kubelet[1158]: E1019 17:06:18.765170    1158 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-769998 -n kubernetes-upgrade-769998
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-769998 -n kubernetes-upgrade-769998: exit status 2 (15.956902694s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-769998" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-769998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-769998
E1019 17:06:38.154433    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kindnet-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-769998: (2.686837426s)
--- FAIL: TestKubernetesUpgrade (589.30s)
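
The restart loop recorded above appears to stem from the kube-apiserver error "failed to listen on 0.0.0.0:8443: bind: address already in use" (see the kube-apiserver section), which left the pod in CrashLoopBackOff and every API call (describe nodes, node status, leases) timing out. Below is a hedged sketch of how the conflicting listener could have been identified while the node container still existed (it is deleted just above); it assumes ss from iproute2 is available inside the kicbase image:

	# hypothetical check, only meaningful during the failure window before the profile is deleted
	docker exec kubernetes-upgrade-769998 sh -c 'ss -ltnp | grep 8443'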

                                                
                                    

Test pass (305/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 13.53
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 11.51
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.38
21 TestBinaryMirror 0.8
22 TestOffline 55.94
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 161.31
29 TestAddons/serial/Volcano 39.14
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 10.45
35 TestAddons/parallel/Registry 15
36 TestAddons/parallel/RegistryCreds 0.64
37 TestAddons/parallel/Ingress 20.03
38 TestAddons/parallel/InspektorGadget 5.29
39 TestAddons/parallel/MetricsServer 5.65
41 TestAddons/parallel/CSI 50.81
42 TestAddons/parallel/Headlamp 16.56
43 TestAddons/parallel/CloudSpanner 5.48
44 TestAddons/parallel/LocalPath 56.61
45 TestAddons/parallel/NvidiaDevicePlugin 5.56
46 TestAddons/parallel/Yakd 11.65
47 TestAddons/parallel/AmdGpuDevicePlugin 5.51
48 TestAddons/StoppedEnableDisable 12.27
49 TestCertOptions 26.38
50 TestCertExpiration 214.96
52 TestForceSystemdFlag 39.02
53 TestForceSystemdEnv 34.42
54 TestDockerEnvContainerd 39.27
55 TestKVMDriverInstallOrUpdate 1.07
59 TestErrorSpam/setup 23.61
60 TestErrorSpam/start 0.66
61 TestErrorSpam/status 0.92
62 TestErrorSpam/pause 1.42
63 TestErrorSpam/unpause 1.48
64 TestErrorSpam/stop 2.07
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 39.23
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.21
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.78
76 TestFunctional/serial/CacheCmd/cache/add_local 1.91
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.52
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 47.98
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.21
87 TestFunctional/serial/LogsFileCmd 1.29
88 TestFunctional/serial/InvalidService 4.21
90 TestFunctional/parallel/ConfigCmd 0.4
92 TestFunctional/parallel/DryRun 0.42
93 TestFunctional/parallel/InternationalLanguage 0.17
94 TestFunctional/parallel/StatusCmd 0.96
98 TestFunctional/parallel/ServiceCmdConnect 9.55
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 27.91
102 TestFunctional/parallel/SSHCmd 0.63
103 TestFunctional/parallel/CpCmd 1.56
104 TestFunctional/parallel/MySQL 366.56
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 1.6
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
114 TestFunctional/parallel/License 0.42
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.24
120 TestFunctional/parallel/Version/short 0.05
121 TestFunctional/parallel/Version/components 0.49
122 TestFunctional/parallel/ServiceCmd/DeployApp 10.17
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
124 TestFunctional/parallel/ServiceCmd/List 0.51
125 TestFunctional/parallel/ProfileCmd/profile_list 0.39
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
129 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
130 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
131 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
132 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
133 TestFunctional/parallel/ImageCommands/ImageBuild 3.49
134 TestFunctional/parallel/ImageCommands/Setup 1.75
135 TestFunctional/parallel/ServiceCmd/Format 0.34
136 TestFunctional/parallel/ServiceCmd/URL 0.33
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
138 TestFunctional/parallel/MountCmd/any-port 8.98
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.18
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.05
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.91
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.56
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
154 TestFunctional/parallel/MountCmd/specific-port 1.56
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 113.8
164 TestMultiControlPlane/serial/DeployApp 5.23
165 TestMultiControlPlane/serial/PingHostFromPods 1.05
166 TestMultiControlPlane/serial/AddWorkerNode 24.98
167 TestMultiControlPlane/serial/NodeLabels 0.06
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
169 TestMultiControlPlane/serial/CopyFile 16.81
170 TestMultiControlPlane/serial/StopSecondaryNode 12.66
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
172 TestMultiControlPlane/serial/RestartSecondaryNode 9.01
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 90.13
175 TestMultiControlPlane/serial/DeleteSecondaryNode 9.15
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
177 TestMultiControlPlane/serial/StopCluster 36.03
178 TestMultiControlPlane/serial/RestartCluster 51.37
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
180 TestMultiControlPlane/serial/AddSecondaryNode 34.85
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
185 TestJSONOutput/start/Command 38.46
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.69
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.59
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.88
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.21
210 TestKicCustomNetwork/create_custom_network 34.2
211 TestKicCustomNetwork/use_default_bridge_network 22.96
212 TestKicExistingNetwork 23.77
213 TestKicCustomSubnet 26.19
214 TestKicStaticIP 25.55
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 48.49
219 TestMountStart/serial/StartWithMountFirst 5.71
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 5.6
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 7.6
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 64.84
231 TestMultiNode/serial/DeployApp2Nodes 4.97
232 TestMultiNode/serial/PingHostFrom2Pods 0.76
233 TestMultiNode/serial/AddNode 23.61
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.65
236 TestMultiNode/serial/CopyFile 9.46
237 TestMultiNode/serial/StopNode 2.22
238 TestMultiNode/serial/StartAfterStop 6.96
239 TestMultiNode/serial/RestartKeepsNodes 72.86
240 TestMultiNode/serial/DeleteNode 5.23
241 TestMultiNode/serial/StopMultiNode 23.97
242 TestMultiNode/serial/RestartMultiNode 44.37
243 TestMultiNode/serial/ValidateNameConflict 26.81
248 TestPreload 111.67
250 TestScheduledStopUnix 98.38
253 TestInsufficientStorage 9.18
254 TestRunningBinaryUpgrade 75.01
257 TestMissingContainerUpgrade 104.43
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
260 TestNoKubernetes/serial/StartWithK8s 31.91
261 TestNoKubernetes/serial/StartWithStopK8s 30.2
269 TestNetworkPlugins/group/false 3.73
273 TestNoKubernetes/serial/Start 9.31
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
275 TestNoKubernetes/serial/ProfileList 1.24
276 TestNoKubernetes/serial/Stop 2.57
277 TestNoKubernetes/serial/StartNoArgs 8.57
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
279 TestStoppedBinaryUpgrade/Setup 3
280 TestStoppedBinaryUpgrade/Upgrade 51.47
281 TestStoppedBinaryUpgrade/MinikubeLogs 1.36
283 TestPause/serial/Start 41.07
291 TestNetworkPlugins/group/auto/Start 41.56
292 TestPause/serial/SecondStartNoReconfiguration 5.58
293 TestPause/serial/Pause 0.67
294 TestPause/serial/VerifyStatus 0.31
295 TestPause/serial/Unpause 0.63
296 TestNetworkPlugins/group/auto/KubeletFlags 0.32
297 TestPause/serial/PauseAgain 0.8
298 TestNetworkPlugins/group/auto/NetCatPod 9.27
299 TestPause/serial/DeletePaused 2.86
300 TestPause/serial/VerifyDeletedResources 29.04
301 TestNetworkPlugins/group/auto/DNS 0.12
302 TestNetworkPlugins/group/auto/Localhost 0.11
303 TestNetworkPlugins/group/auto/HairPin 0.1
304 TestNetworkPlugins/group/kindnet/Start 39.3
305 TestNetworkPlugins/group/calico/Start 47.1
306 TestNetworkPlugins/group/custom-flannel/Start 54.02
307 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
309 TestNetworkPlugins/group/kindnet/NetCatPod 12.3
310 TestNetworkPlugins/group/calico/ControllerPod 6.01
311 TestNetworkPlugins/group/calico/KubeletFlags 0.27
312 TestNetworkPlugins/group/calico/NetCatPod 16.22
313 TestNetworkPlugins/group/kindnet/DNS 0.14
314 TestNetworkPlugins/group/kindnet/Localhost 0.1
315 TestNetworkPlugins/group/kindnet/HairPin 0.1
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.25
318 TestNetworkPlugins/group/custom-flannel/DNS 0.15
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
321 TestNetworkPlugins/group/calico/DNS 0.13
322 TestNetworkPlugins/group/calico/Localhost 0.11
323 TestNetworkPlugins/group/calico/HairPin 0.12
324 TestNetworkPlugins/group/enable-default-cni/Start 65.7
325 TestNetworkPlugins/group/flannel/Start 44.44
326 TestNetworkPlugins/group/bridge/Start 37.18
327 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
328 TestNetworkPlugins/group/bridge/NetCatPod 14.18
329 TestNetworkPlugins/group/flannel/ControllerPod 6.01
330 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
331 TestNetworkPlugins/group/flannel/NetCatPod 30.19
332 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
333 TestNetworkPlugins/group/enable-default-cni/NetCatPod 22.19
334 TestNetworkPlugins/group/bridge/DNS 0.13
335 TestNetworkPlugins/group/bridge/Localhost 0.11
336 TestNetworkPlugins/group/bridge/HairPin 0.11
337 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
338 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
339 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
341 TestStartStop/group/old-k8s-version/serial/FirstStart 51.43
342 TestNetworkPlugins/group/flannel/DNS 0.14
343 TestNetworkPlugins/group/flannel/Localhost 0.11
344 TestNetworkPlugins/group/flannel/HairPin 0.12
346 TestStartStop/group/no-preload/serial/FirstStart 68.63
348 TestStartStop/group/embed-certs/serial/FirstStart 44.57
349 TestStartStop/group/old-k8s-version/serial/DeployApp 9.3
350 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.93
351 TestStartStop/group/old-k8s-version/serial/Stop 12.29
352 TestStartStop/group/embed-certs/serial/DeployApp 8.24
353 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
354 TestStartStop/group/old-k8s-version/serial/SecondStart 50.82
355 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.89
356 TestStartStop/group/embed-certs/serial/Stop 12.18
357 TestStartStop/group/no-preload/serial/DeployApp 9.24
358 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
359 TestStartStop/group/embed-certs/serial/SecondStart 51.48
360 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
361 TestStartStop/group/no-preload/serial/Stop 12.86
362 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
363 TestStartStop/group/no-preload/serial/SecondStart 44.12
364 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
365 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
366 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
367 TestStartStop/group/old-k8s-version/serial/Pause 2.73
369 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.12
370 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
372 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
374 TestStartStop/group/embed-certs/serial/Pause 2.77
375 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.16
377 TestStartStop/group/newest-cni/serial/FirstStart 29.9
378 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.5
379 TestStartStop/group/no-preload/serial/Pause 3.23
380 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.79
383 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.85
384 TestStartStop/group/newest-cni/serial/Stop 1.36
385 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.01
386 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
387 TestStartStop/group/newest-cni/serial/SecondStart 10.39
388 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
391 TestStartStop/group/newest-cni/serial/Pause 2.66
392 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
393 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.36
394 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
395 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
396 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
397 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.71
x
+
TestDownloadOnly/v1.28.0/json-events (13.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-119345 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-119345 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.531980174s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (13.53s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1019 16:20:42.416512    7254 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1019 16:20:42.416638    7254 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-119345
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-119345: exit status 85 (60.979572ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-119345 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-119345 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:20:28
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:20:28.925266    7266 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:20:28.925511    7266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:28.925520    7266 out.go:374] Setting ErrFile to fd 2...
	I1019 16:20:28.925524    7266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:28.925697    7266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
	W1019 16:20:28.925815    7266 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21683-3708/.minikube/config/config.json: open /home/jenkins/minikube-integration/21683-3708/.minikube/config/config.json: no such file or directory
	I1019 16:20:28.926293    7266 out.go:368] Setting JSON to true
	I1019 16:20:28.927155    7266 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":171,"bootTime":1760890658,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:20:28.927238    7266 start.go:143] virtualization: kvm guest
	I1019 16:20:28.929533    7266 out.go:99] [download-only-119345] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1019 16:20:28.929695    7266 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21683-3708/.minikube/cache/preloaded-tarball: no such file or directory
	I1019 16:20:28.929715    7266 notify.go:221] Checking for updates...
	I1019 16:20:28.931100    7266 out.go:171] MINIKUBE_LOCATION=21683
	I1019 16:20:28.932526    7266 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:20:28.933862    7266 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	I1019 16:20:28.935168    7266 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	I1019 16:20:28.936395    7266 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1019 16:20:28.938840    7266 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1019 16:20:28.939035    7266 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:20:28.963480    7266 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:20:28.963550    7266 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:29.322426    7266 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-19 16:20:29.311092796 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:20:29.322517    7266 docker.go:319] overlay module found
	I1019 16:20:29.324160    7266 out.go:99] Using the docker driver based on user configuration
	I1019 16:20:29.324188    7266 start.go:309] selected driver: docker
	I1019 16:20:29.324194    7266 start.go:930] validating driver "docker" against <nil>
	I1019 16:20:29.324272    7266 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:29.382344    7266 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-19 16:20:29.373168184 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:20:29.382499    7266 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 16:20:29.383024    7266 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1019 16:20:29.383198    7266 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 16:20:29.384912    7266 out.go:171] Using Docker driver with root privileges
	I1019 16:20:29.385987    7266 cni.go:84] Creating CNI manager for ""
	I1019 16:20:29.386060    7266 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1019 16:20:29.386073    7266 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 16:20:29.386144    7266 start.go:353] cluster config:
	{Name:download-only-119345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-119345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:20:29.387309    7266 out.go:99] Starting "download-only-119345" primary control-plane node in "download-only-119345" cluster
	I1019 16:20:29.387330    7266 cache.go:124] Beginning downloading kic base image for docker with containerd
	I1019 16:20:29.388414    7266 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1019 16:20:29.388435    7266 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1019 16:20:29.388539    7266 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 16:20:29.406101    7266 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 16:20:29.406295    7266 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1019 16:20:29.406382    7266 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 16:20:29.481022    7266 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1019 16:20:29.481064    7266 cache.go:59] Caching tarball of preloaded images
	I1019 16:20:29.481254    7266 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1019 16:20:29.483249    7266 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1019 16:20:29.483273    7266 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1019 16:20:29.582517    7266 preload.go:290] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1019 16:20:29.582651    7266 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21683-3708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1019 16:20:34.399603    7266 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	
	
	* The control-plane node download-only-119345 host does not exist
	  To start a cluster, run: "minikube start -p download-only-119345"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-119345
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (11.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-264194 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-264194 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.51111326s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.51s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1019 16:20:54.346731    7254 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1019 16:20:54.346773    7254 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-3708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-264194
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-264194: exit status 85 (61.273383ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-119345 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-119345 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ delete  │ -p download-only-119345                                                                                                                                                               │ download-only-119345 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │ 19 Oct 25 16:20 UTC │
	│ start   │ -o=json --download-only -p download-only-264194 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-264194 │ jenkins │ v1.37.0 │ 19 Oct 25 16:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:20:42
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:20:42.875751    7637 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:20:42.876004    7637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:42.876014    7637 out.go:374] Setting ErrFile to fd 2...
	I1019 16:20:42.876020    7637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:20:42.876300    7637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
	I1019 16:20:42.876813    7637 out.go:368] Setting JSON to true
	I1019 16:20:42.877704    7637 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":185,"bootTime":1760890658,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:20:42.877796    7637 start.go:143] virtualization: kvm guest
	I1019 16:20:42.879662    7637 out.go:99] [download-only-264194] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:20:42.879835    7637 notify.go:221] Checking for updates...
	I1019 16:20:42.881066    7637 out.go:171] MINIKUBE_LOCATION=21683
	I1019 16:20:42.882349    7637 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:20:42.883683    7637 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	I1019 16:20:42.884733    7637 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	I1019 16:20:42.885808    7637 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1019 16:20:42.887713    7637 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1019 16:20:42.887966    7637 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:20:42.910263    7637 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:20:42.910382    7637 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:42.966862    7637 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-19 16:20:42.957079224 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:20:42.966961    7637 docker.go:319] overlay module found
	I1019 16:20:42.968500    7637 out.go:99] Using the docker driver based on user configuration
	I1019 16:20:42.968529    7637 start.go:309] selected driver: docker
	I1019 16:20:42.968540    7637 start.go:930] validating driver "docker" against <nil>
	I1019 16:20:42.968613    7637 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:20:43.032101    7637 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-19 16:20:43.021812121 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:20:43.032279    7637 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 16:20:43.032761    7637 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1019 16:20:43.032904    7637 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 16:20:43.034902    7637 out.go:171] Using Docker driver with root privileges
	I1019 16:20:43.036104    7637 cni.go:84] Creating CNI manager for ""
	I1019 16:20:43.036158    7637 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1019 16:20:43.036170    7637 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 16:20:43.036222    7637 start.go:353] cluster config:
	{Name:download-only-264194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-264194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:20:43.037535    7637 out.go:99] Starting "download-only-264194" primary control-plane node in "download-only-264194" cluster
	I1019 16:20:43.037562    7637 cache.go:124] Beginning downloading kic base image for docker with containerd
	I1019 16:20:43.038788    7637 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1019 16:20:43.038810    7637 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1019 16:20:43.038926    7637 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 16:20:43.055458    7637 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 16:20:43.055584    7637 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1019 16:20:43.055600    7637 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1019 16:20:43.055605    7637 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1019 16:20:43.055613    7637 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1019 16:20:43.381399    7637 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1019 16:20:43.381454    7637 cache.go:59] Caching tarball of preloaded images
	I1019 16:20:43.381646    7637 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1019 16:20:43.383587    7637 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1019 16:20:43.383612    7637 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1019 16:20:43.490117    7637 preload.go:290] Got checksum from GCS API "5d6e976daeaa84851976fc4d674fd8f4"
	I1019 16:20:43.490184    7637 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:5d6e976daeaa84851976fc4d674fd8f4 -> /home/jenkins/minikube-integration/21683-3708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-264194 host does not exist
	  To start a cluster, run: "minikube start -p download-only-264194"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-264194
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.38s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-694958 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-694958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-694958
--- PASS: TestDownloadOnlyKic (0.38s)

                                                
                                    
x
+
TestBinaryMirror (0.8s)

                                                
                                                
=== RUN   TestBinaryMirror
I1019 16:20:55.414190    7254 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-546069 --alsologtostderr --binary-mirror http://127.0.0.1:37849 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-546069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-546069
--- PASS: TestBinaryMirror (0.80s)

                                                
                                    
x
+
TestOffline (55.94s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-755604 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-755604 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (52.499785282s)
helpers_test.go:175: Cleaning up "offline-containerd-755604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-755604
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-755604: (3.44242279s)
--- PASS: TestOffline (55.94s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-774290
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-774290: exit status 85 (55.345271ms)

                                                
                                                
-- stdout --
	* Profile "addons-774290" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-774290"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-774290
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-774290: exit status 85 (52.574529ms)

                                                
                                                
-- stdout --
	* Profile "addons-774290" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-774290"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (161.31s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-774290 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-774290 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m41.305143994s)
--- PASS: TestAddons/Setup (161.31s)

                                                
                                    
x
+
TestAddons/serial/Volcano (39.14s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 15.149181ms
addons_test.go:876: volcano-admission stabilized in 15.186567ms
addons_test.go:868: volcano-scheduler stabilized in 15.411826ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-nx6nf" [2d846cd9-3360-41c3-990c-e607fdf030ea] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00296106s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-7g6s4" [e5566668-7b05-4863-b442-91238ab17f06] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003222726s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-2jv6h" [2ed29751-4331-4d3f-be51-f30e35e841fb] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003817597s
addons_test.go:903: (dbg) Run:  kubectl --context addons-774290 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-774290 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-774290 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [83a21089-1eca-494a-a5fb-82d60c2bdf74] Pending
helpers_test.go:352: "test-job-nginx-0" [83a21089-1eca-494a-a5fb-82d60c2bdf74] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [83a21089-1eca-494a-a5fb-82d60c2bdf74] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003624167s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774290 addons disable volcano --alsologtostderr -v=1: (11.771596466s)
--- PASS: TestAddons/serial/Volcano (39.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-774290 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-774290 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.45s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-774290 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-774290 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [813322d8-58ce-46c4-9bba-ac756254cfb2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [813322d8-58ce-46c4-9bba-ac756254cfb2] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003217235s
addons_test.go:694: (dbg) Run:  kubectl --context addons-774290 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-774290 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-774290 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.45s)
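Note: the gcp-auth addon injects GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT into the busybox pod, which the printenv calls above confirm. A hypothetical follow-up check (not part of the test) would be to read the file the variable points to inside the pod:

    # Hypothetical extra check: dump the injected (fake) credentials file.
    kubectl --context addons-774290 exec busybox -- /bin/sh -c 'cat "$GOOGLE_APPLICATION_CREDENTIALS"'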

                                                
                                    
x
+
TestAddons/parallel/Registry (15.00s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 22.416495ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-zj4w5" [eaa53b69-ebcb-4718-ab27-ac2812fcb7ff] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.03940495s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-kq9st" [d8a0457f-aa1f-44f9-a82a-3cd93741417d] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003215742s
addons_test.go:392: (dbg) Run:  kubectl --context addons-774290 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-774290 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-774290 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.17284444s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 ip
2025/10/19 16:24:50 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.00s)
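Note: the registry check runs a one-shot busybox pod that probes the in-cluster registry service, then hits the registry-proxy on the node IP (the DEBUG GET to 192.168.49.2:5000 above). A manual equivalent, assuming the same profile name:

    # Mirrors the in-cluster and node-IP registry checks shown in the log.
    kubectl --context addons-774290 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    MINIKUBE_IP=$(out/minikube-linux-amd64 -p addons-774290 ip)
    curl -sI "http://${MINIKUBE_IP}:5000"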

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.64s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.676279ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-774290
addons_test.go:332: (dbg) Run:  kubectl --context addons-774290 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.64s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (20.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-774290 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-774290 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-774290 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [34190e02-af07-42ed-b20e-fb1d476df57e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [34190e02-af07-42ed-b20e-fb1d476df57e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004259767s
I1019 16:25:01.696111    7254 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-774290 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774290 addons disable ingress-dns --alsologtostderr -v=1: (1.03314645s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774290 addons disable ingress --alsologtostderr -v=1: (7.724528396s)
--- PASS: TestAddons/parallel/Ingress (20.03s)
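Note: the ingress test reaches the nginx backend through the ingress-nginx controller with a Host header, then resolves hello-john.test against the ingress-dns addon listening on the node IP. A manual equivalent of the two checks shown above:

    # Host-header request through the ingress controller, then an ingress-dns lookup.
    out/minikube-linux-amd64 -p addons-774290 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    MINIKUBE_IP=$(out/minikube-linux-amd64 -p addons-774290 ip)
    nslookup hello-john.test "${MINIKUBE_IP}"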

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.29s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-9rkb2" [68167230-a9be-4761-9264-4ae33a18bdcd] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004116933s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.29s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.592886ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-np4z4" [b15e0a35-4321-49fa-ba85-1d990ee38602] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003464549s
addons_test.go:463: (dbg) Run:  kubectl --context addons-774290 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.65s)

                                                
                                    
x
+
TestAddons/parallel/CSI (50.81s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.099395ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-774290 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-774290 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c939b8aa-7744-46c8-ada1-78ff3ee05da6] Pending
helpers_test.go:352: "task-pv-pod" [c939b8aa-7744-46c8-ada1-78ff3ee05da6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c939b8aa-7744-46c8-ada1-78ff3ee05da6] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.002703965s
addons_test.go:572: (dbg) Run:  kubectl --context addons-774290 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-774290 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-774290 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-774290 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-774290 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-774290 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-774290 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [409f12ca-374b-4aac-9ef8-799a94a690fd] Pending
helpers_test.go:352: "task-pv-pod-restore" [409f12ca-374b-4aac-9ef8-799a94a690fd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [409f12ca-374b-4aac-9ef8-799a94a690fd] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003080331s
addons_test.go:614: (dbg) Run:  kubectl --context addons-774290 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-774290 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-774290 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774290 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.52312066s)
--- PASS: TestAddons/parallel/CSI (50.81s)
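Note: the CSI sequence is: create a PVC, poll its phase (the repeated jsonpath runs above), attach it to task-pv-pod, snapshot it, delete the pod and PVC, restore a new PVC from the snapshot, and attach that to task-pv-pod-restore. A shell sketch of the polling and snapshot-readiness steps, equivalent in spirit to the helpers but not the test code itself:

    # Poll the PVC phase, then check that the volume snapshot is ready to use.
    until [ "$(kubectl --context addons-774290 get pvc hpvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done
    kubectl --context addons-774290 get volumesnapshot new-snapshot-demo -n default -o jsonpath='{.status.readyToUse}'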

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-774290 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-29ps8" [0b2c89fb-b04e-42bd-b16e-65320ec12f61] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-29ps8" [0b2c89fb-b04e-42bd-b16e-65320ec12f61] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003378319s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774290 addons disable headlamp --alsologtostderr -v=1: (5.790322812s)
--- PASS: TestAddons/parallel/Headlamp (16.56s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.48s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-c45ch" [2fdf7430-d5f3-4ea0-b1cf-d84cfd8a6db8] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003128517s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.48s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (56.61s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-774290 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-774290 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774290 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [230cbb8e-1fa8-4442-bb97-1e4526a8dfe1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [230cbb8e-1fa8-4442-bb97-1e4526a8dfe1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [230cbb8e-1fa8-4442-bb97-1e4526a8dfe1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003168613s
addons_test.go:967: (dbg) Run:  kubectl --context addons-774290 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 ssh "cat /opt/local-path-provisioner/pvc-2b463340-a3be-4b1b-bda7-badd4f0d7ee4_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-774290 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-774290 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774290 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.741485186s)
--- PASS: TestAddons/parallel/LocalPath (56.61s)
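Note: the LocalPath test provisions a PVC through the storage-provisioner-rancher (local-path) addon, lets a busybox pod write file1, and then reads it back from /opt/local-path-provisioner on the node. The testdata manifests are not reproduced here; below is a hypothetical PVC of the same shape, assuming the addon's storage class is named local-path:

    # Hypothetical PVC sketch; the real testdata/storage-provisioner-rancher/pvc.yaml is not shown here.
    cat <<'EOF' | kubectl --context addons-774290 apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path
      resources:
        requests:
          storage: 64Mi
    EOF
    out/minikube-linux-amd64 -p addons-774290 ssh "ls /opt/local-path-provisioner/"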

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
I1019 16:24:36.240884    7254 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-rfvvv" [90fd5b1e-458a-4265-a649-003065a2a461] Running
I1019 16:24:36.243911    7254 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1019 16:24:36.243933    7254 kapi.go:107] duration metric: took 3.087881ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.061519794s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-9hpkb" [538d1102-76c3-42a2-a848-35061fa1cff1] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003290521s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774290 addons disable yakd --alsologtostderr -v=1: (5.648705417s)
--- PASS: TestAddons/parallel/Yakd (11.65s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.51s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-srsdm" [1822fc17-a383-456b-afeb-e12bf5431c94] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003740529s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774290 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.51s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-774290
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-774290: (12.020951125s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-774290
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-774290
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-774290
--- PASS: TestAddons/StoppedEnableDisable (12.27s)

                                                
                                    
x
+
TestCertOptions (26.38s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-952783 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-952783 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (22.35028665s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-952783 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-952783 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-952783 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-952783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-952783
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-952783: (3.352887842s)
--- PASS: TestCertOptions (26.38s)
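Note: TestCertOptions starts a cluster with extra apiserver SANs (--apiserver-ips/--apiserver-names) and a non-default apiserver port (8555), then verifies both in the generated certificate and in the kubeconfig. A manual equivalent of those two checks:

    # Inspect the apiserver certificate SANs and the advertised server URL/port.
    out/minikube-linux-amd64 -p cert-options-952783 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
    kubectl --context cert-options-952783 config view | grep 'server:'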

                                                
                                    
x
+
TestCertExpiration (214.96s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-226147 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-226147 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (27.110900949s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-226147 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-226147 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.398281841s)
helpers_test.go:175: Cleaning up "cert-expiration-226147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-226147
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-226147: (2.452019832s)
--- PASS: TestCertExpiration (214.96s)
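Note: TestCertExpiration starts the cluster with --cert-expiration=3m, lets the short-lived certificates age out, then restarts with --cert-expiration=8760h so minikube regenerates them. A hypothetical way to confirm the new expiry by hand (not part of the test output):

    # Hypothetical check of the regenerated apiserver certificate's expiry date.
    out/minikube-linux-amd64 -p cert-expiration-226147 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"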

                                                
                                    
x
+
TestForceSystemdFlag (39.02s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-804392 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-804392 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.557243025s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-804392 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-804392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-804392
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-804392: (2.163849198s)
--- PASS: TestForceSystemdFlag (39.02s)
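Note: TestForceSystemdFlag starts the node with --force-systemd and then cats /etc/containerd/config.toml, presumably to assert that containerd is configured for the systemd cgroup driver (TestForceSystemdEnv below exercises the same check via the environment). A narrower manual check, assuming the standard containerd config layout:

    # Hypothetical check: look for SystemdCgroup in containerd's runc runtime options.
    out/minikube-linux-amd64 -p force-systemd-flag-804392 ssh \
      "grep -n 'SystemdCgroup' /etc/containerd/config.toml"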

                                                
                                    
x
+
TestForceSystemdEnv (34.42s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-782449 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-782449 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.28698407s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-782449 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-782449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-782449
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-782449: (2.733748466s)
--- PASS: TestForceSystemdEnv (34.42s)

                                                
                                    
x
+
TestDockerEnvContainerd (39.27s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-073466 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-073466 --driver=docker  --container-runtime=containerd: (23.039382932s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-073466"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXAkTcPE/agent.33806" SSH_AGENT_PID="33807" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXAkTcPE/agent.33806" SSH_AGENT_PID="33807" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXAkTcPE/agent.33806" SSH_AGENT_PID="33807" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.960696359s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXAkTcPE/agent.33806" SSH_AGENT_PID="33807" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-073466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-073466
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-073466: (2.363683878s)
--- PASS: TestDockerEnvContainerd (39.27s)
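Note: TestDockerEnvContainerd points a host docker CLI at the minikube node over SSH (docker-env --ssh-host --ssh-add) and drives a build and an image listing through that connection. Interactively, the same setup is usually done with the eval pattern:

    # Equivalent interactive use of the SSH-based docker-env flow shown above.
    eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-073466)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls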

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.07s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1019 16:56:09.416848    7254 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1019 16:56:09.417009    7254 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3049291120/001:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1019 16:56:09.447728    7254 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3049291120/001/docker-machine-driver-kvm2 version is 1.1.1
W1019 16:56:09.447780    7254 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1019 16:56:09.447873    7254 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1019 16:56:09.447918    7254 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3049291120/001/docker-machine-driver-kvm2
I1019 16:56:10.347134    7254 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3049291120/001:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1019 16:56:10.364210    7254 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3049291120/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.07s)

                                                
                                    
x
+
TestErrorSpam/setup (23.61s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-102564 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-102564 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-102564 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-102564 --driver=docker  --container-runtime=containerd: (23.608257973s)
--- PASS: TestErrorSpam/setup (23.61s)

                                                
                                    
x
+
TestErrorSpam/start (0.66s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

                                                
                                    
x
+
TestErrorSpam/status (0.92s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 status
--- PASS: TestErrorSpam/status (0.92s)

                                                
                                    
x
+
TestErrorSpam/pause (1.42s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 pause
--- PASS: TestErrorSpam/pause (1.42s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

                                                
                                    
x
+
TestErrorSpam/stop (2.07s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 stop: (1.888344127s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102564 --log_dir /tmp/nospam-102564 stop
--- PASS: TestErrorSpam/stop (2.07s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21683-3708/.minikube/files/etc/test/nested/copy/7254/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (39.23s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-761710 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-761710 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (39.2295093s)
--- PASS: TestFunctional/serial/StartWithProxy (39.23s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.21s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1019 16:28:00.654106    7254 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-761710 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-761710 --alsologtostderr -v=8: (6.212437558s)
functional_test.go:678: soft start took 6.213100922s for "functional-761710" cluster.
I1019 16:28:06.866886    7254 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.21s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-761710 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.78s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.78s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.91s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-761710 /tmp/TestFunctionalserialCacheCmdcacheadd_local1776475144/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 cache add minikube-local-cache-test:functional-761710
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-761710 cache add minikube-local-cache-test:functional-761710: (1.562376593s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 cache delete minikube-local-cache-test:functional-761710
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-761710
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.91s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-761710 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (272.441646ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)
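Note: the cache_reload sequence removes a cached image from the node's containerd store, confirms crictl can no longer find it (the non-zero exit above), then runs cache reload to push cached images back and re-checks. The same round trip by hand:

    # Remove, verify missing, reload from minikube's cache, verify present again.
    out/minikube-linux-amd64 -p functional-761710 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-761710 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image absent, as expected"
    out/minikube-linux-amd64 -p functional-761710 cache reload
    out/minikube-linux-amd64 -p functional-761710 ssh sudo crictl inspecti registry.k8s.io/pause:latest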

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 kubectl -- --context functional-761710 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-761710 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (47.98s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-761710 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1019 16:28:37.578786    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:37.585128    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:37.596494    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:37.617948    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:37.659364    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:37.740849    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:37.902432    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:38.224118    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:38.866148    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:40.148223    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:42.710175    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:47.831580    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:28:58.073012    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-761710 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.982577572s)
functional_test.go:776: restart took 47.982695848s for "functional-761710" cluster.
I1019 16:29:01.879770    7254 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (47.98s)
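Note: TestFunctional/serial/ExtraConfig restarts the cluster with --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision and waits for all components; the cert_rotation errors above are a background client-cert loader still pointing at the already-deleted addons-774290 profile. A hypothetical way to confirm the flag actually reached the kube-apiserver static pod (not part of the test output):

    # Hypothetical check: the admission-plugins flag should appear in the apiserver command line.
    kubectl --context functional-761710 -n kube-system get pod -l component=kube-apiserver \
      -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins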

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-761710 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-761710 logs: (1.213012806s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.29s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 logs --file /tmp/TestFunctionalserialLogsFileCmd2311085764/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-761710 logs --file /tmp/TestFunctionalserialLogsFileCmd2311085764/001/logs.txt: (1.284638058s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                    
TestFunctional/serial/InvalidService (4.21s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-761710 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-761710
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-761710: exit status 115 (334.814446ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32504 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-761710 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.21s)
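For reference, the failing path this test exercises can be replayed by hand with the same files and commands recorded above; the Service has no running backing pod, so `minikube service` exits with status 115 (SVC_UNREACHABLE). A minimal sketch, assuming the same profile and test checkout:

    # Service whose selector matches no running pod
    kubectl --context functional-761710 apply -f testdata/invalidsvc.yaml
    # expected to fail with exit status 115 and the SVC_UNREACHABLE message shown above
    out/minikube-linux-amd64 service invalid-svc -p functional-761710
    kubectl --context functional-761710 delete -f testdata/invalidsvc.yaml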

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-761710 config get cpus: exit status 14 (96.222129ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-761710 config get cpus: exit status 14 (57.765463ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
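The behaviour asserted here is that `config get` on an unset key exits with code 14 and "specified key could not be found in config", while a set key round-trips. A minimal sketch using the same binary and profile as this run:

    out/minikube-linux-amd64 -p functional-761710 config unset cpus
    out/minikube-linux-amd64 -p functional-761710 config get cpus    # exit 14: key not found
    out/minikube-linux-amd64 -p functional-761710 config set cpus 2
    out/minikube-linux-amd64 -p functional-761710 config get cpus    # succeeds once the key is set
    out/minikube-linux-amd64 -p functional-761710 config unset cpus
    out/minikube-linux-amd64 -p functional-761710 config get cpus    # exit 14 again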

                                                
                                    
TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-761710 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-761710 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (173.371867ms)

                                                
                                                
-- stdout --
	* [functional-761710] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:29:22.891507   52592 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:29:22.891790   52592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:22.891800   52592 out.go:374] Setting ErrFile to fd 2...
	I1019 16:29:22.891806   52592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:22.892004   52592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
	I1019 16:29:22.892477   52592 out.go:368] Setting JSON to false
	I1019 16:29:22.893480   52592 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":705,"bootTime":1760890658,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:29:22.893570   52592 start.go:143] virtualization: kvm guest
	I1019 16:29:22.897384   52592 out.go:179] * [functional-761710] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:29:22.898814   52592 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:29:22.898854   52592 notify.go:221] Checking for updates...
	I1019 16:29:22.900948   52592 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:29:22.902269   52592 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	I1019 16:29:22.903636   52592 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	I1019 16:29:22.905464   52592 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:29:22.906830   52592 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:29:22.908298   52592 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 16:29:22.908808   52592 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:29:22.936450   52592 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:29:22.936553   52592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:29:23.000362   52592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-19 16:29:22.98920774 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:29:23.000479   52592 docker.go:319] overlay module found
	I1019 16:29:23.002230   52592 out.go:179] * Using the docker driver based on existing profile
	I1019 16:29:23.003488   52592 start.go:309] selected driver: docker
	I1019 16:29:23.003505   52592 start.go:930] validating driver "docker" against &{Name:functional-761710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-761710 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:29:23.003613   52592 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:29:23.005252   52592 out.go:203] 
	W1019 16:29:23.006489   52592 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1019 16:29:23.007677   52592 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-761710 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)
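The dry-run validation can be reproduced with the same flags: minikube checks the memory request before doing any work, and 250MB is below the 1800MB usable minimum, so the first invocation exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY); the second keeps the profile's existing 4096MB and succeeds. Sketch, same binary and profile as above:

    # rejected: 250MiB is less than the usable minimum of 1800MB (exit status 23)
    out/minikube-linux-amd64 start -p functional-761710 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd
    # accepted: dry run against the existing profile settings
    out/minikube-linux-amd64 start -p functional-761710 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd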

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-761710 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-761710 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (172.627239ms)

                                                
                                                
-- stdout --
	* [functional-761710] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:29:22.713274   52428 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:29:22.713390   52428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:22.713401   52428 out.go:374] Setting ErrFile to fd 2...
	I1019 16:29:22.713406   52428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:29:22.713726   52428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
	I1019 16:29:22.714223   52428 out.go:368] Setting JSON to false
	I1019 16:29:22.715424   52428 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":705,"bootTime":1760890658,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:29:22.715520   52428 start.go:143] virtualization: kvm guest
	I1019 16:29:22.717806   52428 out.go:179] * [functional-761710] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1019 16:29:22.719414   52428 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:29:22.719417   52428 notify.go:221] Checking for updates...
	I1019 16:29:22.722023   52428 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:29:22.723475   52428 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	I1019 16:29:22.724934   52428 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	I1019 16:29:22.726239   52428 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:29:22.727508   52428 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:29:22.729234   52428 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 16:29:22.729816   52428 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:29:22.756219   52428 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:29:22.756293   52428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:29:22.821940   52428 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-10-19 16:29:22.810793666 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:29:22.822097   52428 docker.go:319] overlay module found
	I1019 16:29:22.824250   52428 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1019 16:29:22.825629   52428 start.go:309] selected driver: docker
	I1019 16:29:22.825644   52428 start.go:930] validating driver "docker" against &{Name:functional-761710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-761710 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:29:22.825767   52428 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:29:22.827812   52428 out.go:203] 
	W1019 16:29:22.829016   52428 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1019 16:29:22.831950   52428 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.96s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-761710 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-761710 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-4w6jk" [6d2d3d11-a218-4e02-9a5b-c730ea895555] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-4w6jk" [6d2d3d11-a218-4e02-9a5b-c730ea895555] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003546288s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 service hello-node-connect --url
E1019 16:29:18.555145    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31893
functional_test.go:1680: http://192.168.49.2:31893: success! body:
Request served by hello-node-connect-7d85dfc575-4w6jk

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31893
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.55s)
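The connectivity check amounts to exposing a deployment on a NodePort and probing the URL that `minikube service --url` resolves. Sketch with the same names as this run; the 192.168.49.2:31893 endpoint above is specific to this run, and curl stands in for the HTTP GET the harness performs:

    kubectl --context functional-761710 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-761710 expose deployment hello-node-connect --type=NodePort --port=8080
    # once the pod is Running, resolve and probe the NodePort endpoint
    URL=$(out/minikube-linux-amd64 -p functional-761710 service hello-node-connect --url)
    curl "$URL"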

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (27.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [6157c3bc-7577-4717-a593-b3e77d6d9abb] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004402276s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-761710 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-761710 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-761710 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-761710 apply -f testdata/storage-provisioner/pod.yaml
I1019 16:29:15.444077    7254 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [5c26d827-6696-4d6e-922c-833feeea478d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [5c26d827-6696-4d6e-922c-833feeea478d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004206329s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-761710 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-761710 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-761710 delete -f testdata/storage-provisioner/pod.yaml: (1.15480099s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-761710 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [325b2d03-0014-4bc6-9feb-b2a4de97f754] Pending
helpers_test.go:352: "sp-pod" [325b2d03-0014-4bc6-9feb-b2a4de97f754] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [325b2d03-0014-4bc6-9feb-b2a4de97f754] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003737619s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-761710 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.91s)
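The persistence check is: bind a PVC, write through the mount from one pod, delete and recreate the pod, and confirm the file survives. Sketch using the same manifests and commands logged above:

    kubectl --context functional-761710 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-761710 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-761710 exec sp-pod -- touch /tmp/mount/foo
    # recreate the pod; the claim (and the file) must survive
    kubectl --context functional-761710 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-761710 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-761710 exec sp-pod -- ls /tmp/mount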

                                                
                                    
TestFunctional/parallel/SSHCmd (0.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.56s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh -n functional-761710 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 cp functional-761710:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd124758359/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh -n functional-761710 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh -n functional-761710 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.56s)
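The copy paths covered are: host file into the node, node file back out to the host, and host file into a node path that does not exist yet. Sketch with the same profile; /tmp/cp-test.txt below is a placeholder for the per-run temp directory used above:

    out/minikube-linux-amd64 -p functional-761710 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-761710 ssh -n functional-761710 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-761710 cp functional-761710:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-761710 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    out/minikube-linux-amd64 -p functional-761710 ssh -n functional-761710 "sudo cat /tmp/does/not/exist/cp-test.txt"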

                                                
                                    
TestFunctional/parallel/MySQL (366.56s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-761710 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-f6zqq" [a065c342-53de-4ec8-ad51-73316b849dc7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-f6zqq" [a065c342-53de-4ec8-ad51-73316b849dc7] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 6m5.002989278s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-761710 exec mysql-5bb876957f-f6zqq -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-761710 exec mysql-5bb876957f-f6zqq -- mysql -ppassword -e "show databases;": exit status 1 (111.31636ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1019 16:35:33.455917    7254 retry.go:31] will retry after 1.147792968s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-761710 exec mysql-5bb876957f-f6zqq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (366.56s)
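Most of this test's 366s is spent waiting for the mysql pod to become healthy; the first query also raced mysqld startup (ERROR 2002 on the socket) and the harness simply retried about a second later. Sketch of the check itself; the pod name is specific to this run:

    kubectl --context functional-761710 replace --force -f testdata/mysql.yaml
    # wait for the app=mysql pod to be Running, then query it (retry if mysqld is still starting)
    kubectl --context functional-761710 exec mysql-5bb876957f-f6zqq -- mysql -ppassword -e "show databases;"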

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/7254/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "sudo cat /etc/test/nested/copy/7254/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.6s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/7254.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "sudo cat /etc/ssl/certs/7254.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/7254.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "sudo cat /usr/share/ca-certificates/7254.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/72542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "sudo cat /etc/ssl/certs/72542.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/72542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "sudo cat /usr/share/ca-certificates/72542.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.60s)
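The cert sync assertion is that the host user's certificates (7254.pem / 72542.pem in this run) are present in the node at the standard CA locations, including the hash-named links. Sketch of the checks, same paths as above:

    out/minikube-linux-amd64 -p functional-761710 ssh "sudo cat /etc/ssl/certs/7254.pem"
    out/minikube-linux-amd64 -p functional-761710 ssh "sudo cat /usr/share/ca-certificates/7254.pem"
    out/minikube-linux-amd64 -p functional-761710 ssh "sudo cat /etc/ssl/certs/51391683.0"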

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-761710 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-761710 ssh "sudo systemctl is-active docker": exit status 1 (294.877541ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-761710 ssh "sudo systemctl is-active crio": exit status 1 (290.408212ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
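Since this profile runs containerd, the test asserts that the other runtimes' units are inactive inside the node; `systemctl is-active` prints "inactive" and exits non-zero, which propagates as exit status 1 from the minikube ssh invocation. Sketch:

    out/minikube-linux-amd64 -p functional-761710 ssh "sudo systemctl is-active docker"   # prints inactive, non-zero exit
    out/minikube-linux-amd64 -p functional-761710 ssh "sudo systemctl is-active crio"     # prints inactive, non-zero exit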

                                                
                                    
TestFunctional/parallel/License (0.42s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-761710 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-761710 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-761710 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-761710 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 49354: os: process already finished
helpers_test.go:525: unable to kill pid 49031: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-761710 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-761710 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [5501c62b-ae75-43c4-8259-7f05fb49c11a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [5501c62b-ae75-43c4-8259-7f05fb49c11a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.003322224s
I1019 16:29:21.455357    7254 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.24s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-761710 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-761710 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-5nt4t" [3174c6e9-903f-4679-be2d-5b92a81a3922] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-5nt4t" [3174c6e9-903f-4679-be2d-5b92a81a3922] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003250046s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "344.253268ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "50.222254ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 service list -o json
functional_test.go:1504: Took "501.629818ms" to run "out/minikube-linux-amd64 -p functional-761710 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "329.935951ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "50.423456ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31311
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-761710 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-761710
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-761710
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-761710 image ls --format short --alsologtostderr:
I1019 16:29:37.096776   57739 out.go:360] Setting OutFile to fd 1 ...
I1019 16:29:37.097202   57739 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:37.097222   57739 out.go:374] Setting ErrFile to fd 2...
I1019 16:29:37.097230   57739 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:37.097474   57739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
I1019 16:29:37.098137   57739 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:37.098252   57739 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:37.098650   57739 cli_runner.go:164] Run: docker container inspect functional-761710 --format={{.State.Status}}
I1019 16:29:37.121217   57739 ssh_runner.go:195] Run: systemctl --version
I1019 16:29:37.121278   57739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-761710
I1019 16:29:37.142657   57739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/functional-761710/id_rsa Username:docker}
I1019 16:29:37.238906   57739 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-761710 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/library/nginx                     │ alpine             │ sha256:5e7abc │ 22.6MB │
│ docker.io/library/nginx                     │ latest             │ sha256:07ccdb │ 62.7MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ docker.io/library/minikube-local-cache-test │ functional-761710  │ sha256:cb8695 │ 992B   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/kicbase/echo-server               │ functional-761710  │ sha256:9056ab │ 2.37MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-761710 image ls --format table --alsologtostderr:
I1019 16:29:37.770537   58180 out.go:360] Setting OutFile to fd 1 ...
I1019 16:29:37.770796   58180 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:37.770806   58180 out.go:374] Setting ErrFile to fd 2...
I1019 16:29:37.770811   58180 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:37.770987   58180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
I1019 16:29:37.771559   58180 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:37.771646   58180 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:37.771999   58180 cli_runner.go:164] Run: docker container inspect functional-761710 --format={{.State.Status}}
I1019 16:29:37.793938   58180 ssh_runner.go:195] Run: systemctl --version
I1019 16:29:37.793990   58180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-761710
I1019 16:29:37.813666   58180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/functional-761710/id_rsa Username:docker}
I1019 16:29:37.910684   58180 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-761710 image ls --format json --alsologtostderr:
[{"id":"sha256:cb8695773ad004a537696c2c3596e8327fcf9800795b63cb0a4ae8846d9b839e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-761710"],"size":"992"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests
":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-761710"],"size":"2372971"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb992
50061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c
3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22596807"},{"id":"sha256:07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"62706233"},{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-761710 image ls --format json --alsologtostderr:
I1019 16:29:37.552461   58029 out.go:360] Setting OutFile to fd 1 ...
I1019 16:29:37.552743   58029 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:37.552753   58029 out.go:374] Setting ErrFile to fd 2...
I1019 16:29:37.552757   58029 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:37.552979   58029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
I1019 16:29:37.553624   58029 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:37.553706   58029 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:37.554112   58029 cli_runner.go:164] Run: docker container inspect functional-761710 --format={{.State.Status}}
I1019 16:29:37.573932   58029 ssh_runner.go:195] Run: systemctl --version
I1019 16:29:37.574009   58029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-761710
I1019 16:29:37.595665   58029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/functional-761710/id_rsa Username:docker}
I1019 16:29:37.693402   58029 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
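The JSON form lends itself to scripted assertions. A sketch under the assumption that jq is installed on the host; the field names come straight from the output above:

# print "repo:tag  size" for every image reported by the node (jq is an assumption, not part of the test)
out/minikube-linux-amd64 -p functional-761710 image ls --format json \
  | jq -r '.[] | "\(.repoTags[0])  \(.size)"'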

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-761710 image ls --format yaml --alsologtostderr:
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:cb8695773ad004a537696c2c3596e8327fcf9800795b63cb0a4ae8846d9b839e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-761710
size: "992"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "22596807"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-761710
size: "2372971"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "62706233"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-761710 image ls --format yaml --alsologtostderr:
I1019 16:29:37.320576   57883 out.go:360] Setting OutFile to fd 1 ...
I1019 16:29:37.320837   57883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:37.320847   57883 out.go:374] Setting ErrFile to fd 2...
I1019 16:29:37.320851   57883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:37.321125   57883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
I1019 16:29:37.321724   57883 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:37.321829   57883 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:37.322291   57883 cli_runner.go:164] Run: docker container inspect functional-761710 --format={{.State.Status}}
I1019 16:29:37.342420   57883 ssh_runner.go:195] Run: systemctl --version
I1019 16:29:37.342477   57883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-761710
I1019 16:29:37.362715   57883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/functional-761710/id_rsa Username:docker}
I1019 16:29:37.460255   57883 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-761710 ssh pgrep buildkitd: exit status 1 (267.929002ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image build -t localhost/my-image:functional-761710 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-761710 image build -t localhost/my-image:functional-761710 testdata/build --alsologtostderr: (3.006370011s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-761710 image build -t localhost/my-image:functional-761710 testdata/build --alsologtostderr:
I1019 16:29:37.808355   58192 out.go:360] Setting OutFile to fd 1 ...
I1019 16:29:37.808654   58192 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:37.808665   58192 out.go:374] Setting ErrFile to fd 2...
I1019 16:29:37.808668   58192 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:37.808881   58192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
I1019 16:29:37.809607   58192 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:37.810263   58192 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:37.810786   58192 cli_runner.go:164] Run: docker container inspect functional-761710 --format={{.State.Status}}
I1019 16:29:37.829456   58192 ssh_runner.go:195] Run: systemctl --version
I1019 16:29:37.829504   58192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-761710
I1019 16:29:37.849080   58192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/functional-761710/id_rsa Username:docker}
I1019 16:29:37.944096   58192 build_images.go:162] Building image from path: /tmp/build.2817674645.tar
I1019 16:29:37.944150   58192 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1019 16:29:37.953976   58192 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2817674645.tar
I1019 16:29:37.957771   58192 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2817674645.tar: stat -c "%s %y" /var/lib/minikube/build/build.2817674645.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2817674645.tar': No such file or directory
I1019 16:29:37.957795   58192 ssh_runner.go:362] scp /tmp/build.2817674645.tar --> /var/lib/minikube/build/build.2817674645.tar (3072 bytes)
I1019 16:29:37.977182   58192 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2817674645
I1019 16:29:37.986837   58192 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2817674645 -xf /var/lib/minikube/build/build.2817674645.tar
I1019 16:29:37.995430   58192 containerd.go:394] Building image: /var/lib/minikube/build/build.2817674645
I1019 16:29:37.995491   58192 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2817674645 --local dockerfile=/var/lib/minikube/build/build.2817674645 --output type=image,name=localhost/my-image:functional-761710
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 DONE 0.4s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:248fe2f0b94a84e291bbf695d62bc69d3592b6f7f9c23db7cf13f2519dd16ccc done
#8 exporting config sha256:e5c9dd0145d9ddc837234a5075d2f6d1a303611eaf3d0c73468e0ef95db0d73b done
#8 naming to localhost/my-image:functional-761710
#8 naming to localhost/my-image:functional-761710 done
#8 DONE 0.1s
I1019 16:29:40.742903   58192 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2817674645 --local dockerfile=/var/lib/minikube/build/build.2817674645 --output type=image,name=localhost/my-image:functional-761710: (2.747375745s)
I1019 16:29:40.742979   58192 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2817674645
I1019 16:29:40.751482   58192 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2817674645.tar
I1019 16:29:40.759301   58192 build_images.go:218] Built localhost/my-image:functional-761710 from /tmp/build.2817674645.tar
I1019 16:29:40.759359   58192 build_images.go:134] succeeded building to: functional-761710
I1019 16:29:40.759371   58192 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image ls
E1019 16:29:59.517014    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:31:21.439314    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:33:37.573271    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:34:05.281691    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.49s)
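The build test first confirms over SSH that no buildkitd is running, then hands testdata/build to the node, where minikube drives buildctl against containerd (visible in the log above). A condensed sketch of the user-facing flow, assuming the same testdata/build directory with its small busybox-based Dockerfile:

# expected to exit non-zero when buildkitd is not running in the node
out/minikube-linux-amd64 -p functional-761710 ssh pgrep buildkitd
# build the context into the node's containerd image store, then confirm it landed
out/minikube-linux-amd64 -p functional-761710 image build -t localhost/my-image:functional-761710 testdata/build --alsologtostderr
out/minikube-linux-amd64 -p functional-761710 image ls | grep my-image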

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.722705689s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-761710
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31311
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
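Both service subtests query the NodePort mapping for the hello-node Service. A sketch, assuming that Deployment and Service from earlier in the run still exist:

# full URL (the test above resolved http://192.168.49.2:31311)
out/minikube-linux-amd64 -p functional-761710 service hello-node --url
# node IP only, via the Go template used by the Format subtest
out/minikube-linux-amd64 -p functional-761710 service hello-node --url --format={{.IP}}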

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-761710 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
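The ingress IP is read directly from the Service status while minikube tunnel is running. A sketch, assuming the nginx-svc LoadBalancer Service created by the earlier tunnel subtests:

kubectl --context functional-761710 get svc nginx-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'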

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-761710 /tmp/TestFunctionalparallelMountCmdany-port3271928396/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760891361477542369" to /tmp/TestFunctionalparallelMountCmdany-port3271928396/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760891361477542369" to /tmp/TestFunctionalparallelMountCmdany-port3271928396/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760891361477542369" to /tmp/TestFunctionalparallelMountCmdany-port3271928396/001/test-1760891361477542369
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-761710 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (284.48184ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1019 16:29:21.762350    7254 retry.go:31] will retry after 686.958298ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 19 16:29 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 19 16:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 19 16:29 test-1760891361477542369
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh cat /mount-9p/test-1760891361477542369
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-761710 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [28c8c7b0-33f3-4f56-9279-a9e083888db1] Pending
helpers_test.go:352: "busybox-mount" [28c8c7b0-33f3-4f56-9279-a9e083888db1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [28c8c7b0-33f3-4f56-9279-a9e083888db1] Running
helpers_test.go:352: "busybox-mount" [28c8c7b0-33f3-4f56-9279-a9e083888db1] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [28c8c7b0-33f3-4f56-9279-a9e083888db1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003326656s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-761710 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-761710 /tmp/TestFunctionalparallelMountCmdany-port3271928396/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.98s)
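The any-port flow maps a host directory into the node over 9p and verifies it from the guest. A sketch using an illustrative host path (/tmp/mount-demo is an assumption, not the test's tmpdir); run the mount in one terminal and the checks in another:

mkdir -p /tmp/mount-demo
# terminal 1: keep this running in the foreground
out/minikube-linux-amd64 mount -p functional-761710 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1
# terminal 2: confirm the 9p mount and list its contents
out/minikube-linux-amd64 -p functional-761710 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-761710 ssh -- ls -la /mount-9p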

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.192.131 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-761710 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image load --daemon kicbase/echo-server:functional-761710 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image load --daemon kicbase/echo-server:functional-761710 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-761710
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image load --daemon kicbase/echo-server:functional-761710 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.91s)
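The three daemon-load subtests above all push an image from the host Docker daemon into the node's containerd store and confirm it with image ls. A condensed sketch using the same echo-server image:

docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-761710
out/minikube-linux-amd64 -p functional-761710 image load --daemon kicbase/echo-server:functional-761710 --alsologtostderr
out/minikube-linux-amd64 -p functional-761710 image ls | grep echo-server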

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image save kicbase/echo-server:functional-761710 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image rm kicbase/echo-server:functional-761710 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.56s)
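ImageSaveToFile, ImageRemove and ImageLoadFromFile together form a tarball round-trip. A sketch with the tar written to the current directory rather than the CI workspace path used above:

out/minikube-linux-amd64 -p functional-761710 image save kicbase/echo-server:functional-761710 ./echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-761710 image rm kicbase/echo-server:functional-761710 --alsologtostderr
out/minikube-linux-amd64 -p functional-761710 image load ./echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-761710 image ls | grep echo-server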

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-761710
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 image save --daemon kicbase/echo-server:functional-761710 --alsologtostderr
I1019 16:29:27.853951    7254 detect.go:223] nested VM detected
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-761710
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-761710 /tmp/TestFunctionalparallelMountCmdspecific-port1670123339/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-761710 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (270.093761ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1019 16:29:30.724240    7254 retry.go:31] will retry after 291.230914ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-761710 /tmp/TestFunctionalparallelMountCmdspecific-port1670123339/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-761710 ssh "sudo umount -f /mount-9p": exit status 1 (271.857624ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-761710 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-761710 /tmp/TestFunctionalparallelMountCmdspecific-port1670123339/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.56s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-761710 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3004856168/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-761710 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3004856168/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-761710 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3004856168/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-761710 ssh "findmnt -T" /mount1: exit status 1 (341.628107ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1019 16:29:32.360896    7254 retry.go:31] will retry after 429.348459ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-761710 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-761710 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-761710 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3004856168/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-761710 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3004856168/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-761710 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3004856168/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)
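VerifyCleanup relies on mount --kill=true to terminate every mount helper attached to the profile in one call. A sketch with an illustrative host path; the three background mounts mirror the subtest above:

mkdir -p /tmp/mount-demo
out/minikube-linux-amd64 mount -p functional-761710 /tmp/mount-demo:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-761710 /tmp/mount-demo:/mount2 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-761710 /tmp/mount-demo:/mount3 --alsologtostderr -v=1 &
# kill all mount processes for this profile
out/minikube-linux-amd64 mount -p functional-761710 --kill=true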

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-761710
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-761710
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-761710
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (113.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-720185 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m53.093609925s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (113.80s)
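StartCluster brings up a multi-control-plane profile via the --ha flag. A sketch of the start and the status check the test runs, using the same docker driver and containerd runtime as the rest of this report:

out/minikube-linux-amd64 -p ha-720185 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=containerd
out/minikube-linux-amd64 -p ha-720185 status --alsologtostderr -v 5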

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-720185 kubectl -- rollout status deployment/busybox: (3.279535393s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-g8hc2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-hlx6q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-tscch -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-g8hc2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-hlx6q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-tscch -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-g8hc2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-hlx6q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-tscch -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.23s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-g8hc2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-g8hc2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-hlx6q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-hlx6q -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-tscch -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-tscch -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.05s)
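Host reachability is checked from inside the busybox pods through the host.minikube.internal name. A sketch; the pod name is taken from this run and will differ on a fresh deployment:

out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-g8hc2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-amd64 -p ha-720185 kubectl -- exec busybox-7b57f96db7-g8hc2 -- sh -c "ping -c 1 192.168.49.1"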

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (24.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-720185 node add --alsologtostderr -v 5: (24.092784855s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.98s)
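AddWorkerNode grows the HA profile by one worker node and re-checks status. A sketch of the two commands the test drives:

out/minikube-linux-amd64 -p ha-720185 node add --alsologtostderr -v 5
out/minikube-linux-amd64 -p ha-720185 status --alsologtostderr -v 5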

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-720185 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (16.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp testdata/cp-test.txt ha-720185:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile361581704/001/cp-test_ha-720185.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185:/home/docker/cp-test.txt ha-720185-m02:/home/docker/cp-test_ha-720185_ha-720185-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m02 "sudo cat /home/docker/cp-test_ha-720185_ha-720185-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185:/home/docker/cp-test.txt ha-720185-m03:/home/docker/cp-test_ha-720185_ha-720185-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m03 "sudo cat /home/docker/cp-test_ha-720185_ha-720185-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185:/home/docker/cp-test.txt ha-720185-m04:/home/docker/cp-test_ha-720185_ha-720185-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m04 "sudo cat /home/docker/cp-test_ha-720185_ha-720185-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp testdata/cp-test.txt ha-720185-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile361581704/001/cp-test_ha-720185-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185-m02:/home/docker/cp-test.txt ha-720185:/home/docker/cp-test_ha-720185-m02_ha-720185.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185 "sudo cat /home/docker/cp-test_ha-720185-m02_ha-720185.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185-m02:/home/docker/cp-test.txt ha-720185-m03:/home/docker/cp-test_ha-720185-m02_ha-720185-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m03 "sudo cat /home/docker/cp-test_ha-720185-m02_ha-720185-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185-m02:/home/docker/cp-test.txt ha-720185-m04:/home/docker/cp-test_ha-720185-m02_ha-720185-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m04 "sudo cat /home/docker/cp-test_ha-720185-m02_ha-720185-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp testdata/cp-test.txt ha-720185-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile361581704/001/cp-test_ha-720185-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185-m03:/home/docker/cp-test.txt ha-720185:/home/docker/cp-test_ha-720185-m03_ha-720185.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185 "sudo cat /home/docker/cp-test_ha-720185-m03_ha-720185.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185-m03:/home/docker/cp-test.txt ha-720185-m02:/home/docker/cp-test_ha-720185-m03_ha-720185-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m02 "sudo cat /home/docker/cp-test_ha-720185-m03_ha-720185-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185-m03:/home/docker/cp-test.txt ha-720185-m04:/home/docker/cp-test_ha-720185-m03_ha-720185-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m04 "sudo cat /home/docker/cp-test_ha-720185-m03_ha-720185-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp testdata/cp-test.txt ha-720185-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile361581704/001/cp-test_ha-720185-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185-m04:/home/docker/cp-test.txt ha-720185:/home/docker/cp-test_ha-720185-m04_ha-720185.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185 "sudo cat /home/docker/cp-test_ha-720185-m04_ha-720185.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185-m04:/home/docker/cp-test.txt ha-720185-m02:/home/docker/cp-test_ha-720185-m04_ha-720185-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m02 "sudo cat /home/docker/cp-test_ha-720185-m04_ha-720185-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 cp ha-720185-m04:/home/docker/cp-test.txt ha-720185-m03:/home/docker/cp-test_ha-720185-m04_ha-720185-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m03 "sudo cat /home/docker/cp-test_ha-720185-m04_ha-720185-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.81s)
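Editor's note: the copy matrix above repeats one round trip per node pair; a minimal sketch of a single hop, using the same profile, binary, and paths that appear in the log, is:

# copy a local file to the primary node, verify it, then relay it to a secondary node and verify again
out/minikube-linux-amd64 -p ha-720185 cp testdata/cp-test.txt ha-720185:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185 "sudo cat /home/docker/cp-test.txt"
out/minikube-linux-amd64 -p ha-720185 cp ha-720185:/home/docker/cp-test.txt ha-720185-m02:/home/docker/cp-test_ha-720185_ha-720185-m02.txt
out/minikube-linux-amd64 -p ha-720185 ssh -n ha-720185-m02 "sudo cat /home/docker/cp-test_ha-720185_ha-720185-m02.txt"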

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.66s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-720185 node stop m02 --alsologtostderr -v 5: (11.970892513s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-720185 status --alsologtostderr -v 5: exit status 7 (685.395695ms)

                                                
                                                
-- stdout --
	ha-720185
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-720185-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-720185-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-720185-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:38:32.890422   81871 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:38:32.890526   81871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:38:32.890534   81871 out.go:374] Setting ErrFile to fd 2...
	I1019 16:38:32.890538   81871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:38:32.890755   81871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
	I1019 16:38:32.890922   81871 out.go:368] Setting JSON to false
	I1019 16:38:32.890944   81871 mustload.go:66] Loading cluster: ha-720185
	I1019 16:38:32.891079   81871 notify.go:221] Checking for updates...
	I1019 16:38:32.891376   81871 config.go:182] Loaded profile config "ha-720185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 16:38:32.891392   81871 status.go:174] checking status of ha-720185 ...
	I1019 16:38:32.891782   81871 cli_runner.go:164] Run: docker container inspect ha-720185 --format={{.State.Status}}
	I1019 16:38:32.911122   81871 status.go:371] ha-720185 host status = "Running" (err=<nil>)
	I1019 16:38:32.911162   81871 host.go:66] Checking if "ha-720185" exists ...
	I1019 16:38:32.911398   81871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-720185
	I1019 16:38:32.931756   81871 host.go:66] Checking if "ha-720185" exists ...
	I1019 16:38:32.931994   81871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:38:32.932035   81871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-720185
	I1019 16:38:32.950585   81871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/ha-720185/id_rsa Username:docker}
	I1019 16:38:33.044616   81871 ssh_runner.go:195] Run: systemctl --version
	I1019 16:38:33.051041   81871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:38:33.063667   81871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:38:33.124450   81871 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-19 16:38:33.114870487 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:38:33.124988   81871 kubeconfig.go:125] found "ha-720185" server: "https://192.168.49.254:8443"
	I1019 16:38:33.125022   81871 api_server.go:166] Checking apiserver status ...
	I1019 16:38:33.125078   81871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:38:33.137616   81871 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1411/cgroup
	W1019 16:38:33.145906   81871 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1411/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 16:38:33.145965   81871 ssh_runner.go:195] Run: ls
	I1019 16:38:33.149756   81871 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1019 16:38:33.155433   81871 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1019 16:38:33.155455   81871 status.go:463] ha-720185 apiserver status = Running (err=<nil>)
	I1019 16:38:33.155464   81871 status.go:176] ha-720185 status: &{Name:ha-720185 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:38:33.155480   81871 status.go:174] checking status of ha-720185-m02 ...
	I1019 16:38:33.155753   81871 cli_runner.go:164] Run: docker container inspect ha-720185-m02 --format={{.State.Status}}
	I1019 16:38:33.174537   81871 status.go:371] ha-720185-m02 host status = "Stopped" (err=<nil>)
	I1019 16:38:33.174562   81871 status.go:384] host is not running, skipping remaining checks
	I1019 16:38:33.174570   81871 status.go:176] ha-720185-m02 status: &{Name:ha-720185-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:38:33.174599   81871 status.go:174] checking status of ha-720185-m03 ...
	I1019 16:38:33.174894   81871 cli_runner.go:164] Run: docker container inspect ha-720185-m03 --format={{.State.Status}}
	I1019 16:38:33.193819   81871 status.go:371] ha-720185-m03 host status = "Running" (err=<nil>)
	I1019 16:38:33.193840   81871 host.go:66] Checking if "ha-720185-m03" exists ...
	I1019 16:38:33.194119   81871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-720185-m03
	I1019 16:38:33.211716   81871 host.go:66] Checking if "ha-720185-m03" exists ...
	I1019 16:38:33.211975   81871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:38:33.212010   81871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-720185-m03
	I1019 16:38:33.230468   81871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/ha-720185-m03/id_rsa Username:docker}
	I1019 16:38:33.323500   81871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:38:33.337251   81871 kubeconfig.go:125] found "ha-720185" server: "https://192.168.49.254:8443"
	I1019 16:38:33.337284   81871 api_server.go:166] Checking apiserver status ...
	I1019 16:38:33.337328   81871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:38:33.349643   81871 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1334/cgroup
	W1019 16:38:33.358058   81871 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1334/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 16:38:33.358118   81871 ssh_runner.go:195] Run: ls
	I1019 16:38:33.362118   81871 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1019 16:38:33.366339   81871 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1019 16:38:33.366359   81871 status.go:463] ha-720185-m03 apiserver status = Running (err=<nil>)
	I1019 16:38:33.366366   81871 status.go:176] ha-720185-m03 status: &{Name:ha-720185-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:38:33.366380   81871 status.go:174] checking status of ha-720185-m04 ...
	I1019 16:38:33.366631   81871 cli_runner.go:164] Run: docker container inspect ha-720185-m04 --format={{.State.Status}}
	I1019 16:38:33.386455   81871 status.go:371] ha-720185-m04 host status = "Running" (err=<nil>)
	I1019 16:38:33.386481   81871 host.go:66] Checking if "ha-720185-m04" exists ...
	I1019 16:38:33.386722   81871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-720185-m04
	I1019 16:38:33.403846   81871 host.go:66] Checking if "ha-720185-m04" exists ...
	I1019 16:38:33.404193   81871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:38:33.404235   81871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-720185-m04
	I1019 16:38:33.421613   81871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/ha-720185-m04/id_rsa Username:docker}
	I1019 16:38:33.514137   81871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:38:33.526498   81871 status.go:176] ha-720185-m04 status: &{Name:ha-720185-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.66s)
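Editor's note: in this run, status exits with code 7 once a node is stopped; a hedged sketch of scripting against that (the exit-code meaning is inferred only from the two status runs captured in this report):

out/minikube-linux-amd64 -p ha-720185 node stop m02 --alsologtostderr -v 5
out/minikube-linux-amd64 -p ha-720185 status --alsologtostderr -v 5
# both status runs in this report return exit status 7 when at least one node is stopped
if [ $? -eq 7 ]; then echo "cluster degraded: one or more nodes stopped"; fi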

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (9.01s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 node start m02 --alsologtostderr -v 5
E1019 16:38:37.575235    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-720185 node start m02 --alsologtostderr -v 5: (8.095791921s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.01s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (90.13s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 stop --alsologtostderr -v 5
E1019 16:39:09.058194    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:39:09.064624    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:39:09.075968    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:39:09.097605    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:39:09.139244    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:39:09.220684    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:39:09.382476    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:39:09.704636    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:39:10.346707    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:39:11.628565    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:39:14.191416    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:39:19.313151    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-720185 stop --alsologtostderr -v 5: (37.226766448s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 start --wait true --alsologtostderr -v 5
E1019 16:39:29.554872    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:39:50.036532    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-720185 start --wait true --alsologtostderr -v 5: (52.792130163s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (90.13s)
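Editor's note: the restart cycle above can be reproduced by hand with the same commands the test runs; listing nodes before and after confirms all four survive the stop/start:

out/minikube-linux-amd64 -p ha-720185 node list --alsologtostderr -v 5
out/minikube-linux-amd64 -p ha-720185 stop --alsologtostderr -v 5
out/minikube-linux-amd64 -p ha-720185 start --wait true --alsologtostderr -v 5
out/minikube-linux-amd64 -p ha-720185 node list --alsologtostderr -v 5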

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.15s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-720185 node delete m03 --alsologtostderr -v 5: (8.360445972s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.15s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.03s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 stop --alsologtostderr -v 5
E1019 16:40:30.998197    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-720185 stop --alsologtostderr -v 5: (35.923830536s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-720185 status --alsologtostderr -v 5: exit status 7 (103.243205ms)

                                                
                                                
-- stdout --
	ha-720185
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-720185-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-720185-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:41:00.063809   98297 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:41:00.064077   98297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:41:00.064086   98297 out.go:374] Setting ErrFile to fd 2...
	I1019 16:41:00.064090   98297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:41:00.064344   98297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
	I1019 16:41:00.064583   98297 out.go:368] Setting JSON to false
	I1019 16:41:00.064606   98297 mustload.go:66] Loading cluster: ha-720185
	I1019 16:41:00.064734   98297 notify.go:221] Checking for updates...
	I1019 16:41:00.065090   98297 config.go:182] Loaded profile config "ha-720185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 16:41:00.065109   98297 status.go:174] checking status of ha-720185 ...
	I1019 16:41:00.065583   98297 cli_runner.go:164] Run: docker container inspect ha-720185 --format={{.State.Status}}
	I1019 16:41:00.084850   98297 status.go:371] ha-720185 host status = "Stopped" (err=<nil>)
	I1019 16:41:00.084895   98297 status.go:384] host is not running, skipping remaining checks
	I1019 16:41:00.084908   98297 status.go:176] ha-720185 status: &{Name:ha-720185 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:41:00.084948   98297 status.go:174] checking status of ha-720185-m02 ...
	I1019 16:41:00.085251   98297 cli_runner.go:164] Run: docker container inspect ha-720185-m02 --format={{.State.Status}}
	I1019 16:41:00.102886   98297 status.go:371] ha-720185-m02 host status = "Stopped" (err=<nil>)
	I1019 16:41:00.102908   98297 status.go:384] host is not running, skipping remaining checks
	I1019 16:41:00.102914   98297 status.go:176] ha-720185-m02 status: &{Name:ha-720185-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:41:00.102930   98297 status.go:174] checking status of ha-720185-m04 ...
	I1019 16:41:00.103208   98297 cli_runner.go:164] Run: docker container inspect ha-720185-m04 --format={{.State.Status}}
	I1019 16:41:00.121133   98297 status.go:371] ha-720185-m04 host status = "Stopped" (err=<nil>)
	I1019 16:41:00.121158   98297 status.go:384] host is not running, skipping remaining checks
	I1019 16:41:00.121165   98297 status.go:176] ha-720185-m04 status: &{Name:ha-720185-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.03s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (51.37s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-720185 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (50.583569054s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (51.37s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (34.85s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 node add --control-plane --alsologtostderr -v 5
E1019 16:41:52.920961    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-720185 node add --control-plane --alsologtostderr -v 5: (33.975229059s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-720185 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (34.85s)
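Editor's note: a hedged sketch of the same add-node flow, reusing the exact commands from this test; the follow-up status call is what confirms the new control-plane member:

out/minikube-linux-amd64 -p ha-720185 node add --control-plane --alsologtostderr -v 5
out/minikube-linux-amd64 -p ha-720185 status --alsologtostderr -v 5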

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                    
TestJSONOutput/start/Command (38.46s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-975202 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-975202 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (38.459350959s)
--- PASS: TestJSONOutput/start/Command (38.46s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-975202 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-975202 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.88s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-975202 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-975202 --output=json --user=testUser: (5.874871095s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-501508 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-501508 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (65.914038ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bbc204a4-1b6b-4aea-953b-1fd132280aac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-501508] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"889f33fe-d17a-41d4-9917-4210d774005e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"69910f56-455f-4034-a47a-ec27a563ce1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4e068834-56b5-4e9e-b4ef-d7745bfe064d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig"}}
	{"specversion":"1.0","id":"09cd6786-fbc3-4c4b-83cd-e55f73da6de9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube"}}
	{"specversion":"1.0","id":"5e88552d-13e7-48c1-a36b-f62c483773e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2bfe986e-9190-42f0-9081-fce3f82e90a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a422a9d0-b474-4b4f-9ac4-794fd97efa39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-501508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-501508
--- PASS: TestErrorJSONOutput (0.21s)
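Editor's note: each stdout line above is a CloudEvents-style JSON object, so the error can be extracted mechanically; a sketch assuming jq is available (jq is not part of the test itself):

# surface only the error event's name and message from --output=json
out/minikube-linux-amd64 start -p json-output-error-501508 --memory=3072 --output=json --wait=true --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'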

                                                
                                    
TestKicCustomNetwork/create_custom_network (34.2s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-222686 --network=
E1019 16:43:37.580734    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-222686 --network=: (32.032734905s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-222686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-222686
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-222686: (2.144815955s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.20s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.96s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-808992 --network=bridge
E1019 16:44:09.058206    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-808992 --network=bridge: (20.951720784s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-808992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-808992
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-808992: (1.992116048s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.96s)

                                                
                                    
TestKicExistingNetwork (23.77s)
=== RUN   TestKicExistingNetwork
I1019 16:44:23.857337    7254 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1019 16:44:23.875495    7254 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1019 16:44:23.875584    7254 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1019 16:44:23.875622    7254 cli_runner.go:164] Run: docker network inspect existing-network
W1019 16:44:23.893712    7254 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1019 16:44:23.893745    7254 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1019 16:44:23.893762    7254 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1019 16:44:23.893902    7254 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1019 16:44:23.911658    7254 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-006c23c4183a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:c9:92:a9:c4:7e} reservation:<nil>}
I1019 16:44:23.911994    7254 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cfff00}
I1019 16:44:23.912014    7254 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1019 16:44:23.912078    7254 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1019 16:44:23.970592    7254 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-618958 --network=existing-network
E1019 16:44:36.762764    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-618958 --network=existing-network: (21.606641234s)
helpers_test.go:175: Cleaning up "existing-network-618958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-618958
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-618958: (2.016661936s)
I1019 16:44:47.612214    7254 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.77s)
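Editor's note: the sequence above amounts to pre-creating a Docker network and pointing minikube at it; a minimal sketch with the subnet, labels, and cleanup check copied from the log:

docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
out/minikube-linux-amd64 start -p existing-network-618958 --network=existing-network
docker network ls --filter=label=existing-network --format {{.Name}}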

                                                
                                    
TestKicCustomSubnet (26.19s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-822107 --subnet=192.168.60.0/24
E1019 16:45:00.643945    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-822107 --subnet=192.168.60.0/24: (23.985041562s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-822107 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-822107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-822107
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-822107: (2.182451832s)
--- PASS: TestKicCustomSubnet (26.19s)

                                                
                                    
TestKicStaticIP (25.55s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-630853 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-630853 --static-ip=192.168.200.200: (23.249828395s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-630853 ip
helpers_test.go:175: Cleaning up "static-ip-630853" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-630853
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-630853: (2.167347803s)
--- PASS: TestKicStaticIP (25.55s)
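Editor's note: the two KIC tests above pin the address space explicitly; a brief sketch combining the flags as they appear in the log, with the inspect/ip commands the tests use to verify:

out/minikube-linux-amd64 start -p custom-subnet-822107 --subnet=192.168.60.0/24
docker network inspect custom-subnet-822107 --format "{{(index .IPAM.Config 0).Subnet}}"
out/minikube-linux-amd64 start -p static-ip-630853 --static-ip=192.168.200.200
out/minikube-linux-amd64 -p static-ip-630853 ip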

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (48.49s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-930209 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-930209 --driver=docker  --container-runtime=containerd: (20.422014929s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-933373 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-933373 --driver=docker  --container-runtime=containerd: (22.563862008s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-930209
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-933373
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-933373" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-933373
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-933373: (1.986039775s)
helpers_test.go:175: Cleaning up "first-930209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-930209
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-930209: (2.334544259s)
--- PASS: TestMinikubeProfile (48.49s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.71s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-754954 --memory=3072 --mount-string /tmp/TestMountStartserial3432261228/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-754954 --memory=3072 --mount-string /tmp/TestMountStartserial3432261228/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.710725846s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.71s)
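The mount flags used above can be tried manually roughly as follows (a sketch mirroring the flags the test passes; the host directory and port are illustrative):

$ minikube start -p mount-demo --memory=3072 --no-kubernetes \
    --mount-string /tmp/mount-demo:/minikube-host \
    --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
    --driver=docker --container-runtime=containerd
$ minikube -p mount-demo ssh -- ls /minikube-host   # the host directory should be visible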

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-754954 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.6s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-779748 --memory=3072 --mount-string /tmp/TestMountStartserial3432261228/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-779748 --memory=3072 --mount-string /tmp/TestMountStartserial3432261228/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.600028472s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.60s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-779748 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-754954 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-754954 --alsologtostderr -v=5: (1.713233295s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-779748 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-779748
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-779748: (1.246890164s)
--- PASS: TestMountStart/serial/Stop (1.25s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.6s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-779748
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-779748: (6.599355848s)
--- PASS: TestMountStart/serial/RestartStopped (7.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-779748 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (64.84s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-920091 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-920091 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.370497332s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.84s)
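A comparable two-node cluster can be brought up directly (a sketch; the profile name is illustrative):

$ minikube start -p multi-demo --nodes=2 --memory=3072 --wait=true \
    --driver=docker --container-runtime=containerd
$ minikube -p multi-demo status       # the control plane and the worker should both report Running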

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.97s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-920091 -- rollout status deployment/busybox: (3.562299252s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- exec busybox-7b57f96db7-crp77 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- exec busybox-7b57f96db7-sz4zf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- exec busybox-7b57f96db7-crp77 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- exec busybox-7b57f96db7-sz4zf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- exec busybox-7b57f96db7-crp77 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- exec busybox-7b57f96db7-sz4zf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.97s)
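The deployment steps above go through minikube's bundled kubectl; the equivalent manual flow is roughly (a sketch; the manifest path is the one from the test's testdata):

$ minikube kubectl -p multi-demo -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
$ minikube kubectl -p multi-demo -- rollout status deployment/busybox
$ minikube kubectl -p multi-demo -- get pods -o jsonpath='{.items[*].status.podIP}'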

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- exec busybox-7b57f96db7-crp77 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- exec busybox-7b57f96db7-crp77 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- exec busybox-7b57f96db7-sz4zf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-920091 -- exec busybox-7b57f96db7-sz4zf -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                    
TestMultiNode/serial/AddNode (23.61s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-920091 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-920091 -v=5 --alsologtostderr: (22.994978849s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.61s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-920091 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.65s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.46s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 cp testdata/cp-test.txt multinode-920091:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 cp multinode-920091:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2725762331/001/cp-test_multinode-920091.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 cp multinode-920091:/home/docker/cp-test.txt multinode-920091-m02:/home/docker/cp-test_multinode-920091_multinode-920091-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091-m02 "sudo cat /home/docker/cp-test_multinode-920091_multinode-920091-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 cp multinode-920091:/home/docker/cp-test.txt multinode-920091-m03:/home/docker/cp-test_multinode-920091_multinode-920091-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091-m03 "sudo cat /home/docker/cp-test_multinode-920091_multinode-920091-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 cp testdata/cp-test.txt multinode-920091-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 cp multinode-920091-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2725762331/001/cp-test_multinode-920091-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 cp multinode-920091-m02:/home/docker/cp-test.txt multinode-920091:/home/docker/cp-test_multinode-920091-m02_multinode-920091.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091 "sudo cat /home/docker/cp-test_multinode-920091-m02_multinode-920091.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 cp multinode-920091-m02:/home/docker/cp-test.txt multinode-920091-m03:/home/docker/cp-test_multinode-920091-m02_multinode-920091-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091-m03 "sudo cat /home/docker/cp-test_multinode-920091-m02_multinode-920091-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 cp testdata/cp-test.txt multinode-920091-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 cp multinode-920091-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2725762331/001/cp-test_multinode-920091-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 cp multinode-920091-m03:/home/docker/cp-test.txt multinode-920091:/home/docker/cp-test_multinode-920091-m03_multinode-920091.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091 "sudo cat /home/docker/cp-test_multinode-920091-m03_multinode-920091.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 cp multinode-920091-m03:/home/docker/cp-test.txt multinode-920091-m02:/home/docker/cp-test_multinode-920091-m03_multinode-920091-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 ssh -n multinode-920091-m02 "sudo cat /home/docker/cp-test_multinode-920091-m03_multinode-920091-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.46s)

                                                
                                    
TestMultiNode/serial/StopNode (2.22s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 node stop m03
E1019 16:48:37.572597    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-920091 node stop m03: (1.257821657s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-920091 status: exit status 7 (486.597264ms)

                                                
                                                
-- stdout --
	multinode-920091
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-920091-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-920091-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-920091 status --alsologtostderr: exit status 7 (479.536986ms)

                                                
                                                
-- stdout --
	multinode-920091
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-920091-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-920091-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:48:38.844431  160800 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:48:38.844686  160800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:48:38.844694  160800 out.go:374] Setting ErrFile to fd 2...
	I1019 16:48:38.844698  160800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:48:38.844882  160800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
	I1019 16:48:38.845067  160800 out.go:368] Setting JSON to false
	I1019 16:48:38.845089  160800 mustload.go:66] Loading cluster: multinode-920091
	I1019 16:48:38.845206  160800 notify.go:221] Checking for updates...
	I1019 16:48:38.845470  160800 config.go:182] Loaded profile config "multinode-920091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 16:48:38.845482  160800 status.go:174] checking status of multinode-920091 ...
	I1019 16:48:38.845858  160800 cli_runner.go:164] Run: docker container inspect multinode-920091 --format={{.State.Status}}
	I1019 16:48:38.864070  160800 status.go:371] multinode-920091 host status = "Running" (err=<nil>)
	I1019 16:48:38.864113  160800 host.go:66] Checking if "multinode-920091" exists ...
	I1019 16:48:38.864443  160800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-920091
	I1019 16:48:38.881416  160800 host.go:66] Checking if "multinode-920091" exists ...
	I1019 16:48:38.881674  160800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:48:38.881717  160800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-920091
	I1019 16:48:38.900139  160800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/multinode-920091/id_rsa Username:docker}
	I1019 16:48:38.993532  160800 ssh_runner.go:195] Run: systemctl --version
	I1019 16:48:38.999799  160800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:48:39.012207  160800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:48:39.067949  160800 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-19 16:48:39.058140288 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:48:39.068472  160800 kubeconfig.go:125] found "multinode-920091" server: "https://192.168.67.2:8443"
	I1019 16:48:39.068502  160800 api_server.go:166] Checking apiserver status ...
	I1019 16:48:39.068532  160800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:48:39.081028  160800 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	W1019 16:48:39.089567  160800 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 16:48:39.089630  160800 ssh_runner.go:195] Run: ls
	I1019 16:48:39.093413  160800 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1019 16:48:39.097617  160800 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1019 16:48:39.097637  160800 status.go:463] multinode-920091 apiserver status = Running (err=<nil>)
	I1019 16:48:39.097646  160800 status.go:176] multinode-920091 status: &{Name:multinode-920091 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:48:39.097660  160800 status.go:174] checking status of multinode-920091-m02 ...
	I1019 16:48:39.097875  160800 cli_runner.go:164] Run: docker container inspect multinode-920091-m02 --format={{.State.Status}}
	I1019 16:48:39.116671  160800 status.go:371] multinode-920091-m02 host status = "Running" (err=<nil>)
	I1019 16:48:39.116701  160800 host.go:66] Checking if "multinode-920091-m02" exists ...
	I1019 16:48:39.116940  160800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-920091-m02
	I1019 16:48:39.135435  160800 host.go:66] Checking if "multinode-920091-m02" exists ...
	I1019 16:48:39.135686  160800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:48:39.135721  160800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-920091-m02
	I1019 16:48:39.153303  160800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/multinode-920091-m02/id_rsa Username:docker}
	I1019 16:48:39.246190  160800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:48:39.258067  160800 status.go:176] multinode-920091-m02 status: &{Name:multinode-920091-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:48:39.258104  160800 status.go:174] checking status of multinode-920091-m03 ...
	I1019 16:48:39.258368  160800 cli_runner.go:164] Run: docker container inspect multinode-920091-m03 --format={{.State.Status}}
	I1019 16:48:39.275769  160800 status.go:371] multinode-920091-m03 host status = "Stopped" (err=<nil>)
	I1019 16:48:39.275793  160800 status.go:384] host is not running, skipping remaining checks
	I1019 16:48:39.275801  160800 status.go:176] multinode-920091-m03 status: &{Name:multinode-920091-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)
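Stopping a single node and re-checking status, as above, looks like this outside the harness (a sketch; note that minikube status deliberately returns a non-zero exit code, 7 in this run, while any node is stopped, which is why the command is reported as Non-zero exit even though the test passes):

$ minikube -p multi-demo node stop m03
$ minikube -p multi-demo status       # exit code 7: one host reports Stopped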

                                                
                                    
TestMultiNode/serial/StartAfterStop (6.96s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-920091 node start m03 -v=5 --alsologtostderr: (6.26933404s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.96s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (72.86s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-920091
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-920091
E1019 16:49:09.058782    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-920091: (24.975818716s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-920091 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-920091 --wait=true -v=5 --alsologtostderr: (47.789292597s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-920091
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.86s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.23s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-920091 node delete m03: (4.631179931s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.23s)
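The node lifecycle used across the preceding sub-tests (add, start after stop, delete) follows the same pattern (a sketch; the profile name is illustrative):

$ minikube node add -p multi-demo         # adds the next worker node (m03 here)
$ minikube -p multi-demo node start m03
$ minikube -p multi-demo node delete m03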

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.97s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-920091 stop: (23.800858409s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-920091 status: exit status 7 (88.149943ms)

                                                
                                                
-- stdout --
	multinode-920091
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-920091-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-920091 status --alsologtostderr: exit status 7 (83.594683ms)

                                                
                                                
-- stdout --
	multinode-920091
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-920091-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:50:28.270624  170572 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:50:28.270927  170572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:50:28.270939  170572 out.go:374] Setting ErrFile to fd 2...
	I1019 16:50:28.270945  170572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:50:28.271172  170572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
	I1019 16:50:28.271380  170572 out.go:368] Setting JSON to false
	I1019 16:50:28.271409  170572 mustload.go:66] Loading cluster: multinode-920091
	I1019 16:50:28.271522  170572 notify.go:221] Checking for updates...
	I1019 16:50:28.271820  170572 config.go:182] Loaded profile config "multinode-920091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 16:50:28.271837  170572 status.go:174] checking status of multinode-920091 ...
	I1019 16:50:28.272320  170572 cli_runner.go:164] Run: docker container inspect multinode-920091 --format={{.State.Status}}
	I1019 16:50:28.291084  170572 status.go:371] multinode-920091 host status = "Stopped" (err=<nil>)
	I1019 16:50:28.291108  170572 status.go:384] host is not running, skipping remaining checks
	I1019 16:50:28.291116  170572 status.go:176] multinode-920091 status: &{Name:multinode-920091 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:50:28.291149  170572 status.go:174] checking status of multinode-920091-m02 ...
	I1019 16:50:28.291409  170572 cli_runner.go:164] Run: docker container inspect multinode-920091-m02 --format={{.State.Status}}
	I1019 16:50:28.308814  170572 status.go:371] multinode-920091-m02 host status = "Stopped" (err=<nil>)
	I1019 16:50:28.308836  170572 status.go:384] host is not running, skipping remaining checks
	I1019 16:50:28.308843  170572 status.go:176] multinode-920091-m02 status: &{Name:multinode-920091-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.97s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (44.37s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-920091 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-920091 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (43.783595644s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-920091 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.37s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.81s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-920091
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-920091-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-920091-m02 --driver=docker  --container-runtime=containerd: exit status 14 (64.64251ms)

                                                
                                                
-- stdout --
	* [multinode-920091-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-920091-m02' is duplicated with machine name 'multinode-920091-m02' in profile 'multinode-920091'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-920091-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-920091-m03 --driver=docker  --container-runtime=containerd: (24.028453748s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-920091
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-920091: exit status 80 (285.574276ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-920091 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-920091-m03 already exists in multinode-920091-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-920091-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-920091-m03: (2.382664675s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.81s)
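Both non-zero exits above are expected: a new profile may not reuse a machine name that already belongs to another profile (multinode-920091-m02 is the second node of multinode-920091), and node add refuses to create a node whose name is already taken by a standalone profile. Deleting the conflicting profile, as the cleanup step does, clears the second condition:

$ minikube delete -p multinode-920091-m03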

                                                
                                    
TestPreload (111.67s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-281764 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-281764 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (46.382037419s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-281764 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-281764 image pull gcr.io/k8s-minikube/busybox: (2.320456278s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-281764
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-281764: (5.677240267s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-281764 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-281764 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (54.618825332s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-281764 image list
helpers_test.go:175: Cleaning up "test-preload-281764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-281764
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-281764: (2.451578627s)
--- PASS: TestPreload (111.67s)
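The preload round-trip above corresponds roughly to the following (a sketch; the profile name is illustrative and the Kubernetes version is the one pinned by the test):

$ minikube start -p preload-demo --memory=3072 --preload=false \
    --kubernetes-version=v1.32.0 --driver=docker --container-runtime=containerd
$ minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
$ minikube stop -p preload-demo
$ minikube start -p preload-demo --memory=3072 --driver=docker --container-runtime=containerd
$ minikube -p preload-demo image list     # the pulled busybox image should survive the restart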

                                                
                                    
TestScheduledStopUnix (98.38s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-088529 --memory=3072 --driver=docker  --container-runtime=containerd
E1019 16:53:37.573169    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-088529 --memory=3072 --driver=docker  --container-runtime=containerd: (22.045845836s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-088529 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-088529 -n scheduled-stop-088529
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-088529 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1019 16:53:57.823839    7254 retry.go:31] will retry after 147.008µs: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
I1019 16:53:57.825003    7254 retry.go:31] will retry after 80.689µs: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
I1019 16:53:57.826089    7254 retry.go:31] will retry after 128.495µs: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
I1019 16:53:57.827233    7254 retry.go:31] will retry after 396.457µs: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
I1019 16:53:57.828360    7254 retry.go:31] will retry after 321.08µs: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
I1019 16:53:57.829499    7254 retry.go:31] will retry after 694.151µs: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
I1019 16:53:57.830626    7254 retry.go:31] will retry after 1.425157ms: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
I1019 16:53:57.832872    7254 retry.go:31] will retry after 1.12366ms: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
I1019 16:53:57.834089    7254 retry.go:31] will retry after 2.735545ms: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
I1019 16:53:57.837290    7254 retry.go:31] will retry after 3.142629ms: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
I1019 16:53:57.841489    7254 retry.go:31] will retry after 6.242454ms: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
I1019 16:53:57.848721    7254 retry.go:31] will retry after 10.689909ms: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
I1019 16:53:57.859951    7254 retry.go:31] will retry after 12.927602ms: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
I1019 16:53:57.873222    7254 retry.go:31] will retry after 19.121464ms: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
I1019 16:53:57.892464    7254 retry.go:31] will retry after 42.792147ms: open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/scheduled-stop-088529/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-088529 --cancel-scheduled
E1019 16:54:09.058676    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-088529 -n scheduled-stop-088529
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-088529
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-088529 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-088529
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-088529: exit status 7 (68.382342ms)

                                                
                                                
-- stdout --
	scheduled-stop-088529
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-088529 -n scheduled-stop-088529
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-088529 -n scheduled-stop-088529: exit status 7 (66.670827ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-088529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-088529
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-088529: (4.958531617s)
--- PASS: TestScheduledStopUnix (98.38s)
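Scheduled stop, as exercised above, can be driven manually like this (a sketch; the profile name is illustrative):

$ minikube stop -p sched-demo --schedule 5m        # arm a stop five minutes out
$ minikube status --format={{.TimeToStop}} -p sched-demo
$ minikube stop -p sched-demo --cancel-scheduled   # cancel the pending stop
$ minikube stop -p sched-demo --schedule 15s       # or let a short schedule fire
$ minikube status -p sched-demo                    # exit status 7 once the host is Stopped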

                                                
                                    
TestInsufficientStorage (9.18s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-909480 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-909480 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.699637063s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"aa3481a4-9b5d-40a2-a5f9-e604b5ee9823","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-909480] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8709ca87-20cc-4f2c-bebf-3bd0efc2caea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"4c69d07c-969d-4d5d-9683-772bbc659a0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"80758ef4-ff47-4309-b12c-f417c9d236b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig"}}
	{"specversion":"1.0","id":"4706730f-3e8a-4354-9905-87886d32facd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube"}}
	{"specversion":"1.0","id":"808f883a-38ff-4bee-ac51-ba19784f76d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c7f4dd69-2ba6-4863-a497-38fccff646b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a475af08-2cbb-4b77-9fe0-22d571ec826d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"9922c956-a638-4a61-a90b-12379de62423","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"56304068-c0b6-4704-b10c-bbeb10dcc055","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4049b6a-8e32-4835-9df3-a07e7103bb15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4501d61c-79cd-4bdd-a2c5-a67e6e2e0826","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-909480\" primary control-plane node in \"insufficient-storage-909480\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ca9cf62f-6df1-45d7-89aa-a7e99b82771e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e94a9d5-2b92-482c-a033-03917b501197","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"95a56ed9-4054-4a83-8c18-1071a85a82c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-909480 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-909480 --output=json --layout=cluster: exit status 7 (280.346325ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-909480","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-909480","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1019 16:55:20.711614  192368 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-909480" does not appear in /home/jenkins/minikube-integration/21683-3708/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-909480 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-909480 --output=json --layout=cluster: exit status 7 (278.342455ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-909480","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-909480","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1019 16:55:20.991011  192480 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-909480" does not appear in /home/jenkins/minikube-integration/21683-3708/kubeconfig
	E1019 16:55:21.001425  192480 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/insufficient-storage-909480/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-909480" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-909480
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-909480: (1.922423184s)
--- PASS: TestInsufficientStorage (9.18s)
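The exit-26 path above is driven by test-only environment variables that also show up in the JSON events (MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE). Assuming the binary honours them the same way outside the harness as it does here, the check can be reproduced roughly as (a sketch; profile name illustrative):

$ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
    minikube start -p storage-demo --memory=3072 --output=json --wait=true \
    --driver=docker --container-runtime=containerd   # exits 26 (RSRC_DOCKER_STORAGE)
$ minikube status -p storage-demo --output=json --layout=cluster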

                                                
                                    
TestRunningBinaryUpgrade (75.01s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1172411919 start -p running-upgrade-218706 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1172411919 start -p running-upgrade-218706 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (47.22764195s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-218706 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-218706 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (22.69991654s)
helpers_test.go:175: Cleaning up "running-upgrade-218706" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-218706
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-218706: (2.498631158s)
--- PASS: TestRunningBinaryUpgrade (75.01s)

                                                
                                    
x
+
TestMissingContainerUpgrade (104.43s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1236771868 start -p missing-upgrade-260606 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1236771868 start -p missing-upgrade-260606 --memory=3072 --driver=docker  --container-runtime=containerd: (42.77392191s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-260606
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-260606
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-260606 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-260606 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (56.422104909s)
helpers_test.go:175: Cleaning up "missing-upgrade-260606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-260606
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-260606: (2.005252436s)
--- PASS: TestMissingContainerUpgrade (104.43s)
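Note: the difference from the running-binary case is the middle step, where the node container is removed so the new binary must recreate it. A small sketch of just that step, assuming the docker CLI is on PATH; the profile name is the one from this run.

package main

import (
	"os"
	"os/exec"
)

func main() {
	profile := "missing-upgrade-260606"
	for _, args := range [][]string{
		{"stop", profile}, // stop the node container created by the old binary
		{"rm", profile},   // remove it entirely; only the profile config remains on disk
	} {
		cmd := exec.Command("docker", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run() // best-effort, mirroring the log where these steps succeed
	}
	// The subsequent `out/minikube-linux-amd64 start -p missing-upgrade-260606 ...`
	// seen above then has to rebuild the missing node container.
}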

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-767545 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-767545 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (84.57357ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-767545] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
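Note: the test only asserts the exit code here. A minimal sketch of reading an exit code such as the 14 (MK_USAGE) above from Go using the standard library; the flags are copied from the failing invocation in this log. The suggested fix is also in the stderr above: minikube config unset kubernetes-version.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "NoKubernetes-767545",
		"--no-kubernetes", "--kubernetes-version=v1.28.0",
		"--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The run above exits 14, minikube's usage-error code (MK_USAGE).
		fmt.Println("exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run:", err)
	}
}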

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (31.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-767545 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1019 16:55:32.130510    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-767545 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.441320176s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-767545 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.91s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (30.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-767545 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-767545 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (25.009771861s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-767545 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-767545 status -o json: exit status 2 (422.467882ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-767545","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-767545
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-767545: (4.769236578s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-102079 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-102079 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (168.532365ms)

                                                
                                                
-- stdout --
	* [false-102079] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:56:00.717041  205479 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:56:00.717328  205479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:56:00.717340  205479 out.go:374] Setting ErrFile to fd 2...
	I1019 16:56:00.717358  205479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:56:00.717568  205479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
	I1019 16:56:00.718092  205479 out.go:368] Setting JSON to false
	I1019 16:56:00.719215  205479 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2303,"bootTime":1760890658,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:56:00.719314  205479 start.go:143] virtualization: kvm guest
	I1019 16:56:00.721516  205479 out.go:179] * [false-102079] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:56:00.722986  205479 notify.go:221] Checking for updates...
	I1019 16:56:00.723032  205479 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:56:00.724572  205479 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:56:00.725786  205479 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
	I1019 16:56:00.727187  205479 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
	I1019 16:56:00.728263  205479 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:56:00.729620  205479 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:56:00.731590  205479 config.go:182] Loaded profile config "NoKubernetes-767545": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1019 16:56:00.731777  205479 config.go:182] Loaded profile config "force-systemd-flag-804392": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 16:56:00.731906  205479 config.go:182] Loaded profile config "offline-containerd-755604": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1019 16:56:00.732022  205479 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:56:00.758348  205479 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1019 16:56:00.758448  205479 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 16:56:00.827398  205479 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-19 16:56:00.81534095 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 16:56:00.827508  205479 docker.go:319] overlay module found
	I1019 16:56:00.829275  205479 out.go:179] * Using the docker driver based on user configuration
	I1019 16:56:00.830608  205479 start.go:309] selected driver: docker
	I1019 16:56:00.830626  205479 start.go:930] validating driver "docker" against <nil>
	I1019 16:56:00.830639  205479 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:56:00.832516  205479 out.go:203] 
	W1019 16:56:00.833682  205479 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1019 16:56:00.834848  205479 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-102079 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-102079

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-102079

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-102079

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-102079

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-102079

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-102079

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-102079

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-102079

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-102079

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-102079

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-102079

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-102079" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-102079" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-3708/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 16:55:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-767545
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-3708/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 16:56:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: offline-containerd-755604
contexts:
- context:
    cluster: NoKubernetes-767545
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 16:55:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-767545
  name: NoKubernetes-767545
- context:
    cluster: offline-containerd-755604
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 16:56:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-containerd-755604
  name: offline-containerd-755604
current-context: offline-containerd-755604
kind: Config
users:
- name: NoKubernetes-767545
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/NoKubernetes-767545/client.crt
    client-key: /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/NoKubernetes-767545/client.key
- name: offline-containerd-755604
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/offline-containerd-755604/client.crt
    client-key: /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/offline-containerd-755604/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-102079

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-102079"

                                                
                                                
----------------------- debugLogs end: false-102079 [took: 3.385617319s] --------------------------------
helpers_test.go:175: Cleaning up "false-102079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-102079
--- PASS: TestNetworkPlugins/group/false (3.73s)
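Note: this test passes precisely because the start is rejected up front. A sketch, assuming the same binary and flags as above, that checks both the non-zero exit and the "requires CNI" usage message captured on stderr.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Expect exit status 14 and the usage error shown in the stderr above:
	// the containerd runtime cannot be combined with --cni=false.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "false-102079",
		"--memory=3072", "--cni=false", "--driver=docker", "--container-runtime=containerd")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	err := cmd.Run()

	rejected := err != nil && strings.Contains(stderr.String(), "requires CNI")
	fmt.Println("rejected as expected:", rejected)
}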

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-767545 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-767545 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (9.312905028s)
--- PASS: TestNoKubernetes/serial/Start (9.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-767545 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-767545 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.009259ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
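Note: the check relies on systemctl exit codes: is-active exits 0 when the unit is active and non-zero otherwise (3 for the inactive kubelet in the run above, surfaced by minikube ssh as exit status 1). A sketch of the same probe from Go.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Re-run the check from the test: success means kubelet is active inside the node.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-767545",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active (unexpected for a --no-kubernetes profile)")
	case errors.As(err, &exitErr):
		// minikube ssh exits 1 here while the remote systemctl exited 3, as in the log.
		fmt.Println("kubelet is not active; ssh exit code:", exitErr.ExitCode())
	default:
		fmt.Println("could not run check:", err)
	}
}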

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-767545
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-767545: (2.569897796s)
--- PASS: TestNoKubernetes/serial/Stop (2.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-767545 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-767545 --driver=docker  --container-runtime=containerd: (8.569897241s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-767545 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-767545 "sudo systemctl is-active --quiet service kubelet": exit status 1 (294.881923ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.00s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (51.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3560972767 start -p stopped-upgrade-706818 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3560972767 start -p stopped-upgrade-706818 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (22.620255988s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3560972767 -p stopped-upgrade-706818 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3560972767 -p stopped-upgrade-706818 stop: (1.764435849s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-706818 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-706818 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (27.081470634s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (51.47s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-706818
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-706818: (1.362076845s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.36s)

                                                
                                    
x
+
TestPause/serial/Start (41.07s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-122168 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-122168 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (41.068346369s)
--- PASS: TestPause/serial/Start (41.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (41.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-102079 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1019 16:58:37.572897    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-102079 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (41.558024339s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.56s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (5.58s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-122168 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1019 16:59:09.058344    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-122168 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.567561576s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.58s)

                                                
                                    
x
+
TestPause/serial/Pause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-122168 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-122168 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-122168 --output=json --layout=cluster: exit status 2 (311.53845ms)

                                                
                                                
-- stdout --
	{"Name":"pause-122168","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-122168","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
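Note: a paused cluster reports 418/Paused for the apiserver and 405/Stopped for the kubelet, and the status command itself exits 2. A sketch that walks the nested Nodes/Components objects shown above; only fields visible in this log are modelled.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeStatus and clusterStatus model only the nested fields visible in the JSON above.
type nodeStatus struct {
	Name       string
	Components map[string]struct {
		StatusCode int
		StatusName string
	}
}

type clusterStatus struct {
	Name  string
	Nodes []nodeStatus
}

func main() {
	// Exits 2 for a paused cluster (as above), but the JSON is still on stdout.
	out, _ := exec.Command("out/minikube-linux-amd64", "status", "-p", "pause-122168",
		"--output=json", "--layout=cluster").Output()

	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	// In the run above: apiserver reports 418 Paused, kubelet 405 Stopped.
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("%s/%s: %d %s\n", n.Name, name, c.StatusCode, c.StatusName)
		}
	}
}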

                                                
                                    
x
+
TestPause/serial/Unpause (0.63s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-122168 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-102079 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.8s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-122168 --alsologtostderr -v=5
I1019 16:59:11.566155    7254 config.go:182] Loaded profile config "auto-102079": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestPause/serial/PauseAgain (0.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-102079 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vnbf2" [23056162-866b-487e-a903-f3c1b97a6623] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vnbf2" [23056162-866b-487e-a903-f3c1b97a6623] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003699308s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)
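Note: the helper polls for pods matching app=netcat with a 15-minute budget. An equivalent standalone wait (not what helpers_test.go does internally) could shell out to kubectl wait, as sketched below.

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Block until the netcat pod created above reports Ready, using the same 15m budget.
	cmd := exec.Command("kubectl", "--context", "auto-102079",
		"wait", "--for=condition=Ready", "pod", "-l", "app=netcat", "--timeout=15m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}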

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.86s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-122168 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-122168 --alsologtostderr -v=5: (2.860634192s)
--- PASS: TestPause/serial/DeletePaused (2.86s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (29.04s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (28.968663554s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-122168
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-122168: exit status 1 (21.272453ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-122168: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (29.04s)
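Note: deletion is verified indirectly: once the profile is gone, docker volume inspect on its name exits 1 with "no such volume", as seen above. A small sketch of that check.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// After `minikube delete`, the profile's volume should no longer exist, so
	// inspect is expected to fail, matching the exit status 1 in the run above.
	out, err := exec.Command("docker", "volume", "inspect", "pause-122168").CombinedOutput()
	if err != nil {
		fmt.Printf("volume is gone (expected): %v\n%s", err, out)
		return
	}
	fmt.Printf("volume still exists:\n%s", out)
}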

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-102079 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-102079 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-102079 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
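Note: the hairpin check connects from inside the netcat pod back to its own Service on port 8080; success shows hairpin NAT works under the auto (default) CNI. A sketch of the same kubectl exec invocation from Go, assuming the Service is named netcat as the command implies.

package main

import (
	"os"
	"os/exec"
)

func main() {
	// From inside the netcat pod, probe the pod's own Service name on port 8080.
	cmd := exec.Command("kubectl", "--context", "auto-102079",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1) // a failure here would indicate broken hairpin traffic
	}
}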

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (39.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-102079 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-102079 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (39.302929849s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (39.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (47.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-102079 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-102079 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (47.104066994s)
--- PASS: TestNetworkPlugins/group/calico/Start (47.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (54.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-102079 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-102079 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (54.021201577s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-kzrpj" [d8688c59-44b7-4cec-889f-b027c5e09975] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004899101s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-102079 "pgrep -a kubelet"
I1019 17:00:22.592137    7254 config.go:182] Loaded profile config "kindnet-102079": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-102079 replace --force -f testdata/netcat-deployment.yaml
I1019 17:00:22.884423    7254 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hnm4h" [b0f62bc7-acbc-4f7a-98c9-f110b73ffd37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hnm4h" [b0f62bc7-acbc-4f7a-98c9-f110b73ffd37] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004429877s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-26mb9" [3effca25-2fcd-470b-82a0-e22994db1bdd] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004213016s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-102079 "pgrep -a kubelet"
I1019 17:00:34.787893    7254 config.go:182] Loaded profile config "calico-102079": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (16.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-102079 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hgjhf" [6f6968a9-f28f-491d-9a84-9bf10b3aff35] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hgjhf" [6f6968a9-f28f-491d-9a84-9bf10b3aff35] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 16.003872952s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (16.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-102079 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-102079 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-102079 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-102079 "pgrep -a kubelet"
I1019 17:00:38.603025    7254 config.go:182] Loaded profile config "custom-flannel-102079": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-102079 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8ms8q" [f97a085c-c244-47c6-9539-f0fc0ca013de] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8ms8q" [f97a085c-c244-47c6-9539-f0fc0ca013de] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003931384s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-102079 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-102079 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-102079 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-102079 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-102079 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-102079 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (65.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-102079 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-102079 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m5.698735778s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (65.70s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (44.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-102079 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-102079 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (44.440558091s)
--- PASS: TestNetworkPlugins/group/flannel/Start (44.44s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (37.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-102079 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1019 17:01:40.645612    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-102079 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (37.177360765s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-102079 "pgrep -a kubelet"
I1019 17:01:52.095684    7254 config.go:182] Loaded profile config "bridge-102079": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (14.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-102079 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mk5lv" [72eb95dd-3f4d-40a1-868a-956c515418e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mk5lv" [72eb95dd-3f4d-40a1-868a-956c515418e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.004551169s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-wzxhr" [3aa03ab8-5b84-48f3-b739-f08dad01e8c5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003783068s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-102079 "pgrep -a kubelet"
I1019 17:01:59.662713    7254 config.go:182] Loaded profile config "flannel-102079": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (30.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-102079 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5zbb2" [d905e072-d5cb-444e-b45b-a7c7d153d766] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5zbb2" [d905e072-d5cb-444e-b45b-a7c7d153d766] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 30.004069115s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (30.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-102079 "pgrep -a kubelet"
I1019 17:02:00.376697    7254 config.go:182] Loaded profile config "enable-default-cni-102079": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (22.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-102079 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hls68" [046fcdd3-97d9-4d7d-9662-3f66aff0296e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hls68" [046fcdd3-97d9-4d7d-9662-3f66aff0296e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 22.003595211s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (22.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-102079 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-102079 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-102079 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-102079 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-102079 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-102079 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (51.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-309999 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-309999 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (51.431665453s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.43s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-102079 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-102079 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-102079 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)
E1019 17:05:17.502823    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kindnet-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:18.784991    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kindnet-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (68.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-189367 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-189367 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m8.625295895s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.63s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (44.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-493778 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-493778 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (44.57374038s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-309999 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fcd2e85e-7890-42c6-a3cc-3a7111788f15] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fcd2e85e-7890-42c6-a3cc-3a7111788f15] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004232126s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-309999 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-309999 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-309999 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-309999 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-309999 --alsologtostderr -v=3: (12.293130783s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-493778 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bc29a18b-0a70-4a4e-842e-92b375dad97a] Pending
helpers_test.go:352: "busybox" [bc29a18b-0a70-4a4e-842e-92b375dad97a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1019 17:03:37.573167    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/addons-774290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [bc29a18b-0a70-4a4e-842e-92b375dad97a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004366642s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-493778 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-309999 -n old-k8s-version-309999
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-309999 -n old-k8s-version-309999: exit status 7 (66.455406ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-309999 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (50.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-309999 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-309999 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (50.46287359s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-309999 -n old-k8s-version-309999
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.82s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-493778 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-493778 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-493778 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-493778 --alsologtostderr -v=3: (12.183852653s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-189367 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2d2549ee-63e9-46ed-9654-f720257d1065] Pending
helpers_test.go:352: "busybox" [2d2549ee-63e9-46ed-9654-f720257d1065] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2d2549ee-63e9-46ed-9654-f720257d1065] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003471289s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-189367 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-493778 -n embed-certs-493778
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-493778 -n embed-certs-493778: exit status 7 (71.857397ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-493778 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-493778 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-493778 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (51.140223764s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-493778 -n embed-certs-493778
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.48s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-189367 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-189367 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-189367 --alsologtostderr -v=3
E1019 17:04:09.059256    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:04:11.820503    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:04:11.826915    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:04:11.838343    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:04:11.859752    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:04:11.901274    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:04:11.982730    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:04:12.144559    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:04:12.466674    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:04:13.108248    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:04:14.390299    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-189367 --alsologtostderr -v=3: (12.858426437s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-189367 -n no-preload-189367
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-189367 -n no-preload-189367: exit status 7 (71.194194ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-189367 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (44.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-189367 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1019 17:04:16.952000    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:04:22.073346    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-189367 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (43.780689791s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-189367 -n no-preload-189367
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (44.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mzpqk" [9e61af67-709a-4770-b92c-ebc517ae3cf5] Running
E1019 17:04:32.315177    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004038732s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mzpqk" [9e61af67-709a-4770-b92c-ebc517ae3cf5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00397887s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-309999 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-309999 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-309999 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-309999 -n old-k8s-version-309999
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-309999 -n old-k8s-version-309999: exit status 2 (307.323459ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-309999 -n old-k8s-version-309999
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-309999 -n old-k8s-version-309999: exit status 2 (308.107132ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-309999 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-309999 -n old-k8s-version-309999
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-309999 -n old-k8s-version-309999
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-884246 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-884246 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (41.119846586s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k2bdg" [052481cd-ba28-42e9-bcff-642a31be6389] Running
E1019 17:04:52.796960    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003006649s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k2bdg" [052481cd-ba28-42e9-bcff-642a31be6389] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003623532s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-493778 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-866pb" [33542229-4f5e-4848-81f6-07e27dc65d53] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004527368s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-493778 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-493778 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-493778 -n embed-certs-493778
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-493778 -n embed-certs-493778: exit status 2 (323.828679ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-493778 -n embed-certs-493778
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-493778 -n embed-certs-493778: exit status 2 (326.737375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-493778 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-493778 -n embed-certs-493778
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-493778 -n embed-certs-493778
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.77s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-866pb" [33542229-4f5e-4848-81f6-07e27dc65d53] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.048388138s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-189367 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (29.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-104383 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-104383 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (29.903863977s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-189367 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.50s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-189367 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-189367 --alsologtostderr -v=1: (1.092095025s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-189367 -n no-preload-189367
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-189367 -n no-preload-189367: exit status 2 (349.973109ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-189367 -n no-preload-189367
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-189367 -n no-preload-189367: exit status 2 (353.383316ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-189367 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-189367 -n no-preload-189367
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-189367 -n no-preload-189367
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.23s)
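Note on the pause checks above: the harness drives the whole verification through exit codes. `minikube status` exits with status 2 once a component reports Paused or Stopped, and the test records this as "status error: exit status 2 (may be ok)" instead of failing. Below is a minimal Go sketch of that tolerance, using the binary path and the no-preload-189367 profile from this run; the helper name and output handling are illustrative, not the harness's actual code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// statusAllowsExit2 runs `minikube status` with a Go-template format and
// treats exit status 2 as non-fatal, mirroring the "(may be ok)" handling
// in the log above. Binary path and profile name are taken from this run.
func statusAllowsExit2(format, profile string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format", format, "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
		// Exit code 2: a component is paused or stopped; the status text is still usable.
		return string(out), nil
	}
	return string(out), err
}

func main() {
	apiserver, err := statusAllowsExit2("{{.APIServer}}", "no-preload-189367")
	fmt.Printf("apiserver=%q err=%v\n", apiserver, err) // expect "Paused" right after `minikube pause`
	kubelet, err := statusAllowsExit2("{{.Kubelet}}", "no-preload-189367")
	fmt.Printf("kubelet=%q err=%v\n", kubelet, err) // expect "Stopped" while paused
}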

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-884246 create -f testdata/busybox.yaml
E1019 17:05:28.550845    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/calico-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:28.593277    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/calico-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:28.674745    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/calico-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b50121d9-4ef8-45f6-ac5a-d7ee835285b2] Pending
E1019 17:05:28.836329    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/calico-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:29.157656    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/calico-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [b50121d9-4ef8-45f6-ac5a-d7ee835285b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1019 17:05:29.799735    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/calico-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:31.081187    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/calico-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [b50121d9-4ef8-45f6-ac5a-d7ee835285b2] Running
E1019 17:05:33.642640    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/calico-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:33.758972    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/auto-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004436024s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-884246 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)
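The DeployApp step above follows a simple pattern: apply testdata/busybox.yaml, wait up to 8m0s for a pod labelled integration-test=busybox to become healthy, then exec `ulimit -n` inside it. The following is a rough Go sketch of such a wait loop, reusing the kubectl context name from this run; the poll interval and jsonpath query are assumptions, not the harness's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForBusybox polls kubectl until a pod labelled integration-test=busybox
// reports phase Running, or the deadline passes. The 8-minute budget matches
// the "waiting 8m0s" line in the log above.
func waitForBusybox(kubeContext string) error {
	deadline := time.Now().Add(8 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second) // illustrative poll interval
	}
	return fmt.Errorf("busybox pod not Running within 8m in context %s", kubeContext)
}

func main() {
	if err := waitForBusybox("default-k8s-diff-port-884246"); err != nil {
		fmt.Println(err)
		return
	}
	// Once Running, the test checks the container's open-file-descriptor limit.
	out, _ := exec.Command("kubectl", "--context", "default-k8s-diff-port-884246",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	fmt.Printf("ulimit -n: %s", out)
}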

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-104383 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1019 17:05:36.710127    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/kindnet-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.79s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-884246 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-884246 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-104383 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-104383 --alsologtostderr -v=3: (1.356875818s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-884246 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-884246 --alsologtostderr -v=3: (12.012052125s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-104383 -n newest-cni-104383
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-104383 -n newest-cni-104383: exit status 7 (64.765343ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-104383 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-104383 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1019 17:05:38.764528    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/calico-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:38.803468    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/custom-flannel-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:38.810152    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/custom-flannel-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:38.821632    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/custom-flannel-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:38.843101    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/custom-flannel-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:38.884498    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/custom-flannel-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:38.966065    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/custom-flannel-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:39.127753    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/custom-flannel-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:39.449661    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/custom-flannel-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:40.091709    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/custom-flannel-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:41.373396    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/custom-flannel-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:43.935641    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/custom-flannel-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-104383 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (10.069690114s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-104383 -n newest-cni-104383
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-104383 image list --format=json
E1019 17:05:49.006592    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/calico-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:05:49.057440    7254 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/custom-flannel-102079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-104383 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-104383 -n newest-cni-104383
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-104383 -n newest-cni-104383: exit status 2 (334.197508ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-104383 -n newest-cni-104383
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-104383 -n newest-cni-104383: exit status 2 (329.720951ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-104383 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-104383 -n newest-cni-104383
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-104383 -n newest-cni-104383
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.66s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-884246 -n default-k8s-diff-port-884246
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-884246 -n default-k8s-diff-port-884246: exit status 7 (72.009192ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-884246 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-884246 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-884246 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (45.047550178s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-884246 -n default-k8s-diff-port-884246
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j2h68" [18dc929a-f7f4-46fb-b7d4-04b7484e82d0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003025797s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j2h68" [18dc929a-f7f4-46fb-b7d4-04b7484e82d0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003184952s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-884246 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-884246 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-884246 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-884246 -n default-k8s-diff-port-884246
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-884246 -n default-k8s-diff-port-884246: exit status 2 (308.167477ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-884246 -n default-k8s-diff-port-884246
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-884246 -n default-k8s-diff-port-884246: exit status 2 (306.906332ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-884246 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-884246 -n default-k8s-diff-port-884246
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-884246 -n default-k8s-diff-port-884246
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)

                                                
                                    

Test skip (25/332)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-102079 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-102079

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-102079

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-102079

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-102079

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-102079

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-102079

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-102079

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-102079

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-102079

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-102079

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-102079

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-102079" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-102079" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-3708/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 16:55:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-767545
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-3708/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 16:55:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-804392
contexts:
- context:
    cluster: NoKubernetes-767545
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 16:55:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-767545
  name: NoKubernetes-767545
- context:
    cluster: force-systemd-flag-804392
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 16:55:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-flag-804392
  name: force-systemd-flag-804392
current-context: force-systemd-flag-804392
kind: Config
users:
- name: NoKubernetes-767545
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/NoKubernetes-767545/client.crt
    client-key: /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/NoKubernetes-767545/client.key
- name: force-systemd-flag-804392
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/force-systemd-flag-804392/client.crt
    client-key: /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/force-systemd-flag-804392/client.key
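The kubectl config above is an ordinary kubeconfig with two clusters, two contexts, and force-systemd-flag-804392 as the current context. As a hedged illustration only, here is a small client-go snippet that loads such a file and lists its contexts; the ~/.kube/config path is an assumption (on this agent the kubeconfig lives under the minikube-integration workspace).

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed location; adjust to the actual kubeconfig path on the CI agent.
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.LoadFromFile(filepath.Join(home, ".kube", "config"))
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name, ctx := range cfg.Contexts {
		fmt.Printf("context %s -> cluster %s (user %s, namespace %s)\n",
			name, ctx.Cluster, ctx.AuthInfo, ctx.Namespace)
	}
}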

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-102079

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-102079"

                                                
                                                
----------------------- debugLogs end: kubenet-102079 [took: 3.165040193s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-102079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-102079
--- SKIP: TestNetworkPlugins/group/kubenet (3.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-102079 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-102079" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-3708/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 16:55:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-767545
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-3708/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 16:56:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: offline-containerd-755604
contexts:
- context:
    cluster: NoKubernetes-767545
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 16:55:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-767545
  name: NoKubernetes-767545
- context:
    cluster: offline-containerd-755604
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 16:56:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-containerd-755604
  name: offline-containerd-755604
current-context: offline-containerd-755604
kind: Config
users:
- name: NoKubernetes-767545
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/NoKubernetes-767545/client.crt
    client-key: /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/NoKubernetes-767545/client.key
- name: offline-containerd-755604
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/offline-containerd-755604/client.crt
    client-key: /home/jenkins/minikube-integration/21683-3708/.minikube/profiles/offline-containerd-755604/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-102079

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-102079" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-102079"

                                                
                                                
----------------------- debugLogs end: cilium-102079 [took: 4.744015801s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-102079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-102079
--- SKIP: TestNetworkPlugins/group/cilium (4.90s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-398463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-398463
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    